00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2008
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3269
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.147 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.148 The recommended git tool is: git
00:00:00.148 using credential 00000000-0000-0000-0000-000000000002
00:00:00.157 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.198 Fetching changes from the remote Git repository
00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.233 Using shallow fetch with depth 1
00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.233 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.534 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.543 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.553 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:07.553 > git config core.sparsecheckout # timeout=10
00:00:07.562 > git read-tree -mu HEAD # timeout=10
00:00:07.576 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:07.594 Commit message: "inventory: add WCP3 to free inventory"
00:00:07.595 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:07.673 [Pipeline] Start of Pipeline
00:00:07.688 [Pipeline] library
00:00:07.690 Loading library shm_lib@master
00:00:07.690 Library shm_lib@master is cached. Copying from home.
00:00:07.703 [Pipeline] node
00:00:07.712 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest
00:00:07.713 [Pipeline] {
00:00:07.721 [Pipeline] catchError
00:00:07.721 [Pipeline] {
00:00:07.733 [Pipeline] wrap
00:00:07.741 [Pipeline] {
00:00:07.748 [Pipeline] stage
00:00:07.750 [Pipeline] { (Prologue)
00:00:07.766 [Pipeline] echo
00:00:07.767 Node: VM-host-SM16
00:00:07.773 [Pipeline] cleanWs
00:00:07.780 [WS-CLEANUP] Deleting project workspace...
00:00:07.780 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.785 [WS-CLEANUP] done
00:00:07.947 [Pipeline] setCustomBuildProperty
00:00:08.028 [Pipeline] httpRequest
00:00:08.055 [Pipeline] echo
00:00:08.056 Sorcerer 10.211.164.101 is alive
00:00:08.063 [Pipeline] httpRequest
00:00:08.066 HttpMethod: GET
00:00:08.067 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.068 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.082 Response Code: HTTP/1.1 200 OK
00:00:08.082 Success: Status code 200 is in the accepted range: 200,404
00:00:08.083 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:16.201 [Pipeline] sh
00:00:16.478 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:16.494 [Pipeline] httpRequest
00:00:16.527 [Pipeline] echo
00:00:16.529 Sorcerer 10.211.164.101 is alive
00:00:16.538 [Pipeline] httpRequest
00:00:16.543 HttpMethod: GET
00:00:16.543 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:16.544 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:16.550 Response Code: HTTP/1.1 200 OK
00:00:16.551 Success: Status code 200 is in the accepted range: 200,404
00:00:16.551 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:25.757 [Pipeline] sh
00:01:26.034 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:28.579 [Pipeline] sh
00:01:28.858 + git -C spdk log --oneline -n5
00:01:28.858 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:28.858 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:28.858 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:28.858 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:28.858 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:01:28.879 [Pipeline] writeFile
00:01:28.898 [Pipeline] sh
00:01:29.178 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:29.191 [Pipeline] sh
00:01:29.499 + cat autorun-spdk.conf
00:01:29.499 SPDK_TEST_UNITTEST=1
00:01:29.499 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.499 SPDK_TEST_NVME=1
00:01:29.499 SPDK_TEST_BLOCKDEV=1
00:01:29.499 SPDK_RUN_ASAN=1
00:01:29.499 SPDK_RUN_UBSAN=1
00:01:29.499 SPDK_TEST_RAID5=1
00:01:29.499 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.544 RUN_NIGHTLY=1
00:01:29.546 [Pipeline] }
00:01:29.565 [Pipeline] // stage
00:01:29.581 [Pipeline] stage
00:01:29.583 [Pipeline] { (Run VM)
00:01:29.597 [Pipeline] sh
00:01:29.876 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:29.876 + echo 'Start stage prepare_nvme.sh'
00:01:29.876 Start stage prepare_nvme.sh
00:01:29.876 + [[ -n 7 ]]
00:01:29.876 + disk_prefix=ex7
00:01:29.876 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]]
00:01:29.876 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]]
00:01:29.876 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf
00:01:29.876 ++ SPDK_TEST_UNITTEST=1
00:01:29.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.876 ++ SPDK_TEST_NVME=1
00:01:29.876 ++ SPDK_TEST_BLOCKDEV=1
00:01:29.876 ++ SPDK_RUN_ASAN=1
00:01:29.876 ++ SPDK_RUN_UBSAN=1
00:01:29.876 ++ SPDK_TEST_RAID5=1
00:01:29.876 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.876 ++ RUN_NIGHTLY=1
00:01:29.876 + cd /var/jenkins/workspace/ubuntu20-vg-autotest
00:01:29.876 + nvme_files=()
00:01:29.876 + declare -A nvme_files
00:01:29.876 + backend_dir=/var/lib/libvirt/images/backends
00:01:29.876 + nvme_files['nvme.img']=5G
00:01:29.876 + nvme_files['nvme-cmb.img']=5G
00:01:29.876 + nvme_files['nvme-multi0.img']=4G
00:01:29.876 + nvme_files['nvme-multi1.img']=4G
00:01:29.876 + nvme_files['nvme-multi2.img']=4G
00:01:29.876 + nvme_files['nvme-openstack.img']=8G
00:01:29.876 + nvme_files['nvme-zns.img']=5G
00:01:29.876 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:29.876 + (( SPDK_TEST_FTL == 1 ))
00:01:29.876 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:29.876 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:29.876 + for nvme in "${!nvme_files[@]}"
00:01:29.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:29.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.876 + for nvme in "${!nvme_files[@]}"
00:01:29.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:29.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.876 + for nvme in "${!nvme_files[@]}"
00:01:29.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:29.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:29.876 + for nvme in "${!nvme_files[@]}"
00:01:29.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:29.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.876 + for nvme in "${!nvme_files[@]}"
00:01:29.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:29.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.877 + for nvme in "${!nvme_files[@]}"
00:01:29.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:29.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.877 + for nvme in "${!nvme_files[@]}"
00:01:29.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:30.444 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.444 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:30.444 + echo 'End stage prepare_nvme.sh'
00:01:30.444 End stage prepare_nvme.sh
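The prepare_nvme.sh trace above reduces to one pattern: an associative array mapping image names to sizes, looped over to pre-create raw NVMe backing files. A minimal standalone sketch of that pattern follows; the internals of create_nvme_img.sh are not shown in the log, so the qemu-img call is an assumption inferred from the "Formatting ... fmt=raw ... preallocation=falloc" lines it prints.

    #!/usr/bin/env bash
    # Sketch only: pre-create the ex7-* raw backing images the way the log suggests.
    set -euo pipefail

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex7

    # Image name -> size, mirroring the nvme_files map traced above.
    declare -A nvme_files=(
        ['nvme.img']=5G
        ['nvme-cmb.img']=5G
        ['nvme-multi0.img']=4G
        ['nvme-multi1.img']=4G
        ['nvme-multi2.img']=4G
        ['nvme-openstack.img']=8G
        ['nvme-zns.img']=5G
    )

    mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
        # Assumed equivalent of create_nvme_img.sh -n <file> -s <size>:
        # a raw, falloc-preallocated image, matching the Formatting output.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
    done

The 4G/5G/8G sizes here correspond to the size=4294967296/5368709120/8589934592 byte counts in the Formatting lines above.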
00:01:30.455 [Pipeline] sh
00:01:30.735 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:30.735 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f ubuntu2004
00:01:30.735
00:01:30.735 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant
00:01:30.735 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk
00:01:30.735 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest
00:01:30.735 HELP=0
00:01:30.735 DRY_RUN=0
00:01:30.735 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,
00:01:30.735 NVME_DISKS_TYPE=nvme,
00:01:30.735 NVME_AUTO_CREATE=0
00:01:30.735 NVME_DISKS_NAMESPACES=,
00:01:30.735 NVME_CMB=,
00:01:30.735 NVME_PMR=,
00:01:30.735 NVME_ZNS=,
00:01:30.735 NVME_MS=,
00:01:30.735 NVME_FDP=,
00:01:30.735 SPDK_VAGRANT_DISTRO=ubuntu2004
00:01:30.735 SPDK_VAGRANT_VMCPU=10
00:01:30.735 SPDK_VAGRANT_VMRAM=12288
00:01:30.735 SPDK_VAGRANT_PROVIDER=libvirt
00:01:30.735 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:30.735 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:30.735 SPDK_OPENSTACK_NETWORK=0
00:01:30.735 VAGRANT_PACKAGE_BOX=0
00:01:30.735 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:30.735 FORCE_DISTRO=true
00:01:30.735 VAGRANT_BOX_VERSION=
00:01:30.735 EXTRA_VAGRANTFILES=
00:01:30.735 NIC_MODEL=e1000
00:01:30.735
00:01:30.735 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt'
00:01:30.735 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest
00:01:33.269 Bringing machine 'default' up with 'libvirt' provider...
00:01:33.835 ==> default: Creating image (snapshot of base box volume).
00:01:33.835 ==> default: Creating domain with the following settings...
00:01:33.835 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720994972_0f8c2bdc4733f61e0397
00:01:33.835 ==> default: -- Domain type: kvm
00:01:33.835 ==> default: -- Cpus: 10
00:01:33.835 ==> default: -- Feature: acpi
00:01:33.835 ==> default: -- Feature: apic
00:01:33.835 ==> default: -- Feature: pae
00:01:33.835 ==> default: -- Memory: 12288M
00:01:33.835 ==> default: -- Memory Backing: hugepages:
00:01:33.835 ==> default: -- Management MAC:
00:01:33.835 ==> default: -- Loader:
00:01:33.835 ==> default: -- Nvram:
00:01:33.835 ==> default: -- Base box: spdk/ubuntu2004
00:01:33.835 ==> default: -- Storage pool: default
00:01:34.094 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720994972_0f8c2bdc4733f61e0397.img (20G)
00:01:34.094 ==> default: -- Volume Cache: default
00:01:34.094 ==> default: -- Kernel:
00:01:34.094 ==> default: -- Initrd:
00:01:34.094 ==> default: -- Graphics Type: vnc
00:01:34.094 ==> default: -- Graphics Port: -1
00:01:34.094 ==> default: -- Graphics IP: 127.0.0.1
00:01:34.094 ==> default: -- Graphics Password: Not defined
00:01:34.094 ==> default: -- Video Type: cirrus
00:01:34.094 ==> default: -- Video VRAM: 9216
00:01:34.094 ==> default: -- Sound Type:
00:01:34.094 ==> default: -- Keymap: en-us
00:01:34.094 ==> default: -- TPM Path:
00:01:34.094 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:34.094 ==> default: -- Command line args:
00:01:34.094 ==> default: -> value=-device,
00:01:34.094 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:34.094 ==> default: -> value=-drive,
00:01:34.094 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:01:34.094 ==> default: -> value=-device,
00:01:34.094 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.094 ==> default: Creating shared folders metadata...
00:01:34.094 ==> default: Starting domain.
00:01:35.995 ==> default: Waiting for domain to get an IP address...
00:01:45.995 ==> default: Waiting for SSH to become available...
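The "-> value=" pairs above are extra QEMU arguments that vagrant-libvirt passes straight through to the emulator, attaching the raw backing file as an emulated NVMe namespace. A rough standalone equivalent for illustration only: the emulator path comes from the Setup line and the CPU/memory counts from SPDK_VAGRANT_VMCPU/SPDK_VAGRANT_VMRAM, while -enable-kvm and the absence of a boot disk are assumptions (the real domain is assembled by libvirt, not launched by hand).

    # Hypothetical direct invocation of the NVMe-related arguments shown above.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -enable-kvm -smp 10 -m 12288 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest this namespace surfaces as nvme0/nvme0n1 with 4096-byte blocks, which is what scripts/setup.sh status reports later in the run.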
00:01:47.896 ==> default: Configuring and enabling network interfaces...
00:01:50.428 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:55.693 ==> default: Mounting SSHFS shared folder...
00:01:56.266 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output
00:01:56.266 ==> default: Checking Mount..
00:01:58.812 ==> default: Checking Mount..
00:01:58.812 ==> default: Folder Successfully Mounted!
00:01:58.812 ==> default: Running provisioner: file...
00:01:59.071 default: ~/.gitconfig => .gitconfig
00:01:59.071
00:01:59.071 SUCCESS!
00:01:59.071
00:01:59.071 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use.
00:01:59.071 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:59.071 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm.
00:01:59.071
00:01:59.080 [Pipeline] }
00:01:59.099 [Pipeline] // stage
00:01:59.110 [Pipeline] dir
00:01:59.110 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt
00:01:59.112 [Pipeline] {
00:01:59.127 [Pipeline] catchError
00:01:59.129 [Pipeline] {
00:01:59.143 [Pipeline] sh
00:01:59.422 + vagrant ssh-config --host vagrant
00:01:59.422 + sed -ne /^Host/,$p
00:01:59.422 + tee ssh_conf
00:02:02.706 Host vagrant
00:02:02.706 HostName 192.168.121.235
00:02:02.706 User vagrant
00:02:02.706 Port 22
00:02:02.706 UserKnownHostsFile /dev/null
00:02:02.706 StrictHostKeyChecking no
00:02:02.706 PasswordAuthentication no
00:02:02.706 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004
00:02:02.706 IdentitiesOnly yes
00:02:02.706 LogLevel FATAL
00:02:02.706 ForwardAgent yes
00:02:02.706 ForwardX11 yes
00:02:02.706
00:02:02.721 [Pipeline] withEnv
00:02:02.723 [Pipeline] {
00:02:02.739 [Pipeline] sh
00:02:03.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:03.015 source /etc/os-release
00:02:03.015 [[ -e /image.version ]] && img=$(< /image.version)
00:02:03.015 # Minimal, systemd-like check.
00:02:03.015 if [[ -e /.dockerenv ]]; then
00:02:03.015 # Clear garbage from the node's name:
00:02:03.015 # agt-er_autotest_547-896 -> autotest_547-896
00:02:03.015 # $HOSTNAME is the actual container id
00:02:03.015 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:03.015 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:03.015 # We can assume this is a mount from a host where container is running,
00:02:03.015 # so fetch its hostname to easily identify the target swarm worker.
00:02:03.015 container="$(< /etc/hostname) ($agent)"
00:02:03.015 else
00:02:03.015 # Fallback
00:02:03.015 container=$agent
00:02:03.015 fi
00:02:03.015 fi
00:02:03.015 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:03.015
00:02:03.593 [Pipeline] }
00:02:03.613 [Pipeline] // withEnv
00:02:03.621 [Pipeline] setCustomBuildProperty
00:02:03.636 [Pipeline] stage
00:02:03.639 [Pipeline] { (Tests)
00:02:03.658 [Pipeline] sh
00:02:03.938 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:04.516 [Pipeline] sh
00:02:04.789 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:05.370 [Pipeline] timeout
00:02:05.370 Timeout set to expire in 1 hr 30 min
00:02:05.372 [Pipeline] {
00:02:05.391 [Pipeline] sh
00:02:05.674 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:06.642 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:02:06.655 [Pipeline] sh
00:02:06.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:07.501 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:07.514 [Pipeline] sh
00:02:07.791 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:08.373 [Pipeline] sh
00:02:08.652 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo
00:02:09.219 ++ readlink -f spdk_repo
00:02:09.219 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:09.219 + [[ -n /home/vagrant/spdk_repo ]]
00:02:09.219 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:09.219 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:09.219 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:09.219 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:09.219 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:09.219 + [[ ubuntu20-vg-autotest == pkgdep-* ]]
00:02:09.219 + cd /home/vagrant/spdk_repo
00:02:09.219 + source /etc/os-release
00:02:09.219 ++ NAME=Ubuntu
00:02:09.219 ++ VERSION='20.04.6 LTS (Focal Fossa)'
00:02:09.219 ++ ID=ubuntu
00:02:09.219 ++ ID_LIKE=debian
00:02:09.219 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS'
00:02:09.219 ++ VERSION_ID=20.04
00:02:09.219 ++ HOME_URL=https://www.ubuntu.com/
00:02:09.219 ++ SUPPORT_URL=https://help.ubuntu.com/
00:02:09.219 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:02:09.219 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:02:09.219 ++ VERSION_CODENAME=focal
00:02:09.219 ++ UBUNTU_CODENAME=focal
00:02:09.219 + uname -a
00:02:09.219 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:02:09.219 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:09.219 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:09.477 Hugepages
00:02:09.477 node hugesize free / total
00:02:09.477 node0 1048576kB 0 / 0
00:02:09.477 node0 2048kB 0 / 0
00:02:09.477
00:02:09.477 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:09.477 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:09.477 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:09.477 + rm -f /tmp/spdk-ld-path
00:02:09.477 + source autorun-spdk.conf
00:02:09.477 ++ SPDK_TEST_UNITTEST=1
00:02:09.477 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.477 ++ SPDK_TEST_NVME=1
00:02:09.477 ++ SPDK_TEST_BLOCKDEV=1
00:02:09.477 ++ SPDK_RUN_ASAN=1
00:02:09.477 ++ SPDK_RUN_UBSAN=1
00:02:09.477 ++ SPDK_TEST_RAID5=1
00:02:09.477 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.477 ++ RUN_NIGHTLY=1
00:02:09.477 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:09.477 + [[ -n '' ]]
00:02:09.477 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:09.477 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:09.477 + for M in /var/spdk/build-*-manifest.txt
00:02:09.477 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:09.477 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.477 + for M in /var/spdk/build-*-manifest.txt
00:02:09.477 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:09.477 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.477 ++ uname
00:02:09.477 + [[ Linux == \L\i\n\u\x ]]
00:02:09.477 + sudo dmesg -T
00:02:09.477 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:09.477 + sudo dmesg --clear
00:02:09.477 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:09.477 + dmesg_pid=2352
00:02:09.477 + sudo dmesg -Tw
00:02:09.477 + [[ Ubuntu == FreeBSD ]]
00:02:09.477 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.477 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.477 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:09.477 + [[ -x /usr/src/fio-static/fio ]]
00:02:09.477 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:09.477 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:09.477 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:09.477 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:09.477 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:09.477 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:09.477 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:09.477 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:09.477 Test configuration:
00:02:09.477 SPDK_TEST_UNITTEST=1
00:02:09.477 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.477 SPDK_TEST_NVME=1
00:02:09.477 SPDK_TEST_BLOCKDEV=1
00:02:09.477 SPDK_RUN_ASAN=1
00:02:09.477 SPDK_RUN_UBSAN=1
00:02:09.477 SPDK_TEST_RAID5=1
00:02:09.477 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.734 RUN_NIGHTLY=1 22:10:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:09.734 22:10:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:09.734 22:10:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:09.734 22:10:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:09.734 22:10:08 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:09.734 22:10:08 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:09.734 22:10:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:09.734 22:10:08 -- paths/export.sh@5 -- $ export PATH
00:02:09.757 22:10:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:09.757 22:10:08 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:09.757 22:10:08 -- common/autobuild_common.sh@435 -- $ date +%s
00:02:09.757 22:10:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720995008.XXXXXX
00:02:09.757 22:10:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720995008.L40EnD
00:02:09.757 22:10:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:02:09.757 22:10:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:02:09.757 22:10:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:09.757 22:10:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:09.757 22:10:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:09.757 22:10:08 -- common/autobuild_common.sh@451 -- $ get_config_params
00:02:09.757 22:10:08 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 22:10:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:02:09.757 22:10:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:09.757 22:10:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:09.757 22:10:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:09.757 22:10:08 -- spdk/autobuild.sh@16 -- $ date -u
00:02:09.757 Sun Jul 14 22:10:08 UTC 2024
00:02:09.757 22:10:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:09.757 LTS-59-g4b94202c6
00:02:09.757 22:10:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:09.757 22:10:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:09.757 22:10:08 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:09.757 22:10:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 ************************************
00:02:09.757 START TEST asan
00:02:09.757 ************************************
00:02:09.757 using asan
00:02:09.757 22:10:08 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:02:09.757
00:02:09.757 real 0m0.000s
00:02:09.757 user 0m0.000s
00:02:09.757 sys 0m0.000s
00:02:09.757 22:10:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:09.757 ************************************
00:02:09.757 END TEST asan
00:02:09.757 ************************************
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 22:10:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:09.757 22:10:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:09.757 22:10:08 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:09.757 22:10:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 ************************************
00:02:09.757 START TEST ubsan
00:02:09.757 ************************************
00:02:09.757 22:10:08 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:02:09.757 using ubsan
00:02:09.757
00:02:09.757 real 0m0.000s
00:02:09.757 user 0m0.000s
00:02:09.757 sys 0m0.000s
00:02:09.757 22:10:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 ************************************
00:02:09.757 END TEST ubsan
00:02:09.757 ************************************
00:02:09.757 22:10:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:09.757 22:10:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:09.757 22:10:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:09.757 22:10:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:09.757 22:10:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:09.757 22:10:08 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:09.757 22:10:08 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:09.757 22:10:08 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:02:09.757 22:10:08 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:02:09.757 22:10:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:09.757 22:10:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.757 ************************************
00:02:09.757 START TEST unittest_build
00:02:09.757 ************************************
00:02:09.757 22:10:08 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:02:09.757 22:10:08 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:02:09.757 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:09.757 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:10.324 Using 'verbs' RDMA provider
00:02:25.456 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:37.649 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:37.649 Creating mk/config.mk...done.
00:02:37.649 Creating mk/cc.flags.mk...done.
00:02:37.649 Type 'make' to build.
00:02:37.649 22:10:36 -- common/autobuild_common.sh@403 -- $ make -j10
00:02:37.649 make[1]: Nothing to be done for 'all'.
00:02:39.552 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[nasm emits this same warning, from reg_sizes.asm:208 and reg_sizes.asm:358, for every remaining object it assembles; the duplicated lines from 00:02:39.810 through 00:02:49.118 are omitted]
`.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.118 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.376 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.634 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.634 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.634 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.210 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.210 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.210 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.476 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.476 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.040 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.040 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.297 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.297 ./include//reg_sizes.asm:358: 
warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.555 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.555 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.555 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.813 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.813 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.813 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.813 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.069 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.069 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.069 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.069 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.327 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.327 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.327 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.327 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.585 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.585 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.585 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.843 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.843 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.843 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.100 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.100 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section 
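These warnings appear to come from the ISA-L assembly sources bundled with SPDK: reg_sizes.asm declares a `.note.gnu.property' section (the GNU CET property note) with the `note' section attribute, which the NASM shipped on this Ubuntu 20.04 image is too old to recognize, so the attribute is dropped and the warning is printed twice (lines 208 and 358) per assembled object. A minimal sketch that should reproduce the message with an older NASM (the file name and contents are illustrative, not taken from this build):

    # hypothetical reproducer -- not part of this build log
    cat > note_test.asm <<'EOF'
    section .note.gnu.property note alloc noexec align=8
    dd 0    ; dummy payload; the actual note words do not matter for the warning
    EOF
    nasm -f elf64 note_test.asm -o note_test.o
    # older NASM: warning: Unknown section attribute 'note' ignored ... [-w+other]
    # a NASM new enough to know the 'note' attribute assembles this silently,
    # and -w-other would suppress the warning class either way

The warning is benign here: the section is still emitted, only the attribute is ignored, and the log shows every object assembling and linking normally.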
00:02:53.873 The Meson build system 00:02:53.873 Version: 1.4.0 00:02:53.873 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:53.873 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:53.873 Build type: native build 00:02:53.873 Program cat found: YES (/usr/bin/cat) 00:02:53.873 Project name: DPDK 00:02:53.873 Project version: 23.11.0 00:02:53.873 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:53.873 C linker for the host machine: cc ld.bfd 2.34 00:02:53.873 Host machine cpu family: x86_64 00:02:53.873 Host machine cpu: x86_64 00:02:53.873 Message: ## Building in Developer Mode ## 00:02:53.873 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.873 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:53.873 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.873 Program python3 found: YES (/usr/bin/python3) 00:02:53.873 Program cat found: YES (/usr/bin/cat) 00:02:53.873 Compiler for C supports arguments -march=native: YES 00:02:53.873 Checking for size of "void *" : 8 00:02:53.873 Checking for size of "void *" : 8 (cached) 00:02:53.873 Library m found: YES 00:02:53.873 Library numa found: YES 00:02:53.873 Has header "numaif.h" : YES 00:02:53.873 Library fdt found: NO 00:02:53.873 Library execinfo found: NO 00:02:53.873 Has header "execinfo.h" : YES 00:02:53.873 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:53.873 Run-time
dependency libarchive found: NO (tried pkgconfig) 00:02:53.873 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.873 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.873 Run-time dependency openssl found: YES 1.1.1f 00:02:53.873 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:53.873 Library pcap found: NO 00:02:53.873 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.873 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.873 Compiler for C supports arguments -Wformat: YES 00:02:53.873 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:53.873 Compiler for C supports arguments -Wformat-security: YES 00:02:53.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.873 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.873 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.873 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.873 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.873 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.873 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.873 Compiler for C supports arguments -Wundef: YES 00:02:53.873 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.873 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.873 Program objdump found: YES (/usr/bin/objdump) 00:02:53.873 Compiler for C supports arguments -mavx512f: YES 00:02:53.873 Checking if "AVX512 checking" compiles: YES 00:02:53.873 Fetching value of define "__SSE4_2__" : 1 00:02:53.873 Fetching value of define "__AES__" : 1 00:02:53.873 Fetching value of define "__AVX__" : 1 00:02:53.873 Fetching value of define "__AVX2__" : 1 00:02:53.873 Fetching value of define "__AVX512BW__" : (undefined) 00:02:53.873 Fetching value of define "__AVX512CD__" : (undefined) 00:02:53.873 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:53.873 Fetching value of define "__AVX512F__" : (undefined) 00:02:53.873 Fetching value of define "__AVX512VL__" : (undefined) 00:02:53.873 Fetching value of define "__PCLMUL__" : 1 00:02:53.873 Fetching value of define "__RDRND__" : 1 00:02:53.873 Fetching value of define "__RDSEED__" : 1 00:02:53.873 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.873 Fetching value of define "__znver1__" : (undefined) 00:02:53.873 Fetching value of define "__znver2__" : (undefined) 00:02:53.873 Fetching value of define "__znver3__" : (undefined) 00:02:53.873 Fetching value of define "__znver4__" : (undefined) 00:02:53.874 Library asan found: YES 00:02:53.874 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.874 Message: lib/log: Defining dependency "log" 00:02:53.874 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.874 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.874 Library rt found: YES 00:02:53.874 Checking for function "getentropy" : NO 00:02:53.874 Message: lib/eal: Defining dependency "eal" 00:02:53.874 Message: lib/ring: Defining dependency "ring" 00:02:53.874 Message: lib/rcu: Defining dependency "rcu" 00:02:53.874 Message: lib/mempool: Defining dependency "mempool" 00:02:53.874 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.874 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.874 Fetching value of 
define "__AVX512F__" : (undefined) (cached) 00:02:53.874 Compiler for C supports arguments -mpclmul: YES 00:02:53.874 Compiler for C supports arguments -maes: YES 00:02:53.874 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.874 Compiler for C supports arguments -mavx512bw: YES 00:02:53.874 Compiler for C supports arguments -mavx512dq: YES 00:02:53.874 Compiler for C supports arguments -mavx512vl: YES 00:02:53.874 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.874 Compiler for C supports arguments -mavx2: YES 00:02:53.874 Compiler for C supports arguments -mavx: YES 00:02:53.874 Message: lib/net: Defining dependency "net" 00:02:53.874 Message: lib/meter: Defining dependency "meter" 00:02:53.874 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.874 Message: lib/pci: Defining dependency "pci" 00:02:53.874 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.874 Message: lib/hash: Defining dependency "hash" 00:02:53.874 Message: lib/timer: Defining dependency "timer" 00:02:53.874 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.874 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.874 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.874 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.874 Message: lib/power: Defining dependency "power" 00:02:53.874 Message: lib/reorder: Defining dependency "reorder" 00:02:53.874 Message: lib/security: Defining dependency "security" 00:02:53.874 Has header "linux/userfaultfd.h" : YES 00:02:53.874 Has header "linux/vduse.h" : NO 00:02:53.874 Message: lib/vhost: Defining dependency "vhost" 00:02:53.874 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.874 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.874 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.874 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.874 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:53.874 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:53.874 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:53.874 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:53.874 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:53.874 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:53.874 Program doxygen found: YES (/usr/bin/doxygen) 00:02:53.874 Configuring doxy-api-html.conf using configuration 00:02:53.874 Configuring doxy-api-man.conf using configuration 00:02:53.874 Program mandb found: YES (/usr/bin/mandb) 00:02:53.874 Program sphinx-build found: NO 00:02:53.874 Configuring rte_build_config.h using configuration 00:02:53.874 Message: 00:02:53.874 ================= 00:02:53.874 Applications Enabled 00:02:53.874 ================= 00:02:53.874 00:02:53.874 apps: 00:02:53.874 00:02:53.874 00:02:53.874 Message: 00:02:53.874 ================= 00:02:53.874 Libraries Enabled 00:02:53.874 ================= 00:02:53.874 00:02:53.874 libs: 00:02:53.874 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:53.874 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:53.874 cryptodev, dmadev, power, reorder, security, vhost, 00:02:53.874 00:02:53.874 Message: 00:02:53.874 =============== 00:02:53.874 Drivers Enabled 00:02:53.874 =============== 00:02:53.874 00:02:53.874 common: 00:02:53.874 00:02:53.874 bus: 00:02:53.874 pci, 
vdev, 00:02:53.874 mempool: 00:02:53.874 ring, 00:02:53.874 dma: 00:02:53.874 00:02:53.874 net: 00:02:53.874 00:02:53.874 crypto: 00:02:53.874 00:02:53.874 compress: 00:02:53.874 00:02:53.874 vdpa: 00:02:53.874 00:02:53.874 00:02:53.874 Message: 00:02:53.874 ================= 00:02:53.874 Content Skipped 00:02:53.874 ================= 00:02:53.874 00:02:53.874 apps: 00:02:53.874 dumpcap: explicitly disabled via build config 00:02:53.874 graph: explicitly disabled via build config 00:02:53.874 pdump: explicitly disabled via build config 00:02:53.874 proc-info: explicitly disabled via build config 00:02:53.874 test-acl: explicitly disabled via build config 00:02:53.874 test-bbdev: explicitly disabled via build config 00:02:53.874 test-cmdline: explicitly disabled via build config 00:02:53.874 test-compress-perf: explicitly disabled via build config 00:02:53.874 test-crypto-perf: explicitly disabled via build config 00:02:53.874 test-dma-perf: explicitly disabled via build config 00:02:53.874 test-eventdev: explicitly disabled via build config 00:02:53.874 test-fib: explicitly disabled via build config 00:02:53.874 test-flow-perf: explicitly disabled via build config 00:02:53.874 test-gpudev: explicitly disabled via build config 00:02:53.874 test-mldev: explicitly disabled via build config 00:02:53.874 test-pipeline: explicitly disabled via build config 00:02:53.874 test-pmd: explicitly disabled via build config 00:02:53.874 test-regex: explicitly disabled via build config 00:02:53.874 test-sad: explicitly disabled via build config 00:02:53.874 test-security-perf: explicitly disabled via build config 00:02:53.874 00:02:53.874 libs: 00:02:53.874 metrics: explicitly disabled via build config 00:02:53.874 acl: explicitly disabled via build config 00:02:53.874 bbdev: explicitly disabled via build config 00:02:53.874 bitratestats: explicitly disabled via build config 00:02:53.874 bpf: explicitly disabled via build config 00:02:53.874 cfgfile: explicitly disabled via build config 00:02:53.874 distributor: explicitly disabled via build config 00:02:53.874 efd: explicitly disabled via build config 00:02:53.874 eventdev: explicitly disabled via build config 00:02:53.874 dispatcher: explicitly disabled via build config 00:02:53.874 gpudev: explicitly disabled via build config 00:02:53.874 gro: explicitly disabled via build config 00:02:53.874 gso: explicitly disabled via build config 00:02:53.874 ip_frag: explicitly disabled via build config 00:02:53.874 jobstats: explicitly disabled via build config 00:02:53.874 latencystats: explicitly disabled via build config 00:02:53.874 lpm: explicitly disabled via build config 00:02:53.874 member: explicitly disabled via build config 00:02:53.874 pcapng: explicitly disabled via build config 00:02:53.874 rawdev: explicitly disabled via build config 00:02:53.874 regexdev: explicitly disabled via build config 00:02:53.874 mldev: explicitly disabled via build config 00:02:53.874 rib: explicitly disabled via build config 00:02:53.874 sched: explicitly disabled via build config 00:02:53.874 stack: explicitly disabled via build config 00:02:53.874 ipsec: explicitly disabled via build config 00:02:53.874 pdcp: explicitly disabled via build config 00:02:53.874 fib: explicitly disabled via build config 00:02:53.874 port: explicitly disabled via build config 00:02:53.874 pdump: explicitly disabled via build config 00:02:53.874 table: explicitly disabled via build config 00:02:53.874 pipeline: explicitly disabled via build config 00:02:53.874 graph: explicitly disabled via 
build config 00:02:53.874 node: explicitly disabled via build config 00:02:53.874 00:02:53.874 drivers: 00:02:53.874 common/cpt: not in enabled drivers build config 00:02:53.874 common/dpaax: not in enabled drivers build config 00:02:53.874 common/iavf: not in enabled drivers build config 00:02:53.874 common/idpf: not in enabled drivers build config 00:02:53.874 common/mvep: not in enabled drivers build config 00:02:53.874 common/octeontx: not in enabled drivers build config 00:02:53.874 bus/auxiliary: not in enabled drivers build config 00:02:53.874 bus/cdx: not in enabled drivers build config 00:02:53.874 bus/dpaa: not in enabled drivers build config 00:02:53.874 bus/fslmc: not in enabled drivers build config 00:02:53.874 bus/ifpga: not in enabled drivers build config 00:02:53.874 bus/platform: not in enabled drivers build config 00:02:53.874 bus/vmbus: not in enabled drivers build config 00:02:53.874 common/cnxk: not in enabled drivers build config 00:02:53.874 common/mlx5: not in enabled drivers build config 00:02:53.874 common/nfp: not in enabled drivers build config 00:02:53.874 common/qat: not in enabled drivers build config 00:02:53.874 common/sfc_efx: not in enabled drivers build config 00:02:53.874 mempool/bucket: not in enabled drivers build config 00:02:53.874 mempool/cnxk: not in enabled drivers build config 00:02:53.874 mempool/dpaa: not in enabled drivers build config 00:02:53.874 mempool/dpaa2: not in enabled drivers build config 00:02:53.874 mempool/octeontx: not in enabled drivers build config 00:02:53.874 mempool/stack: not in enabled drivers build config 00:02:53.874 dma/cnxk: not in enabled drivers build config 00:02:53.874 dma/dpaa: not in enabled drivers build config 00:02:53.874 dma/dpaa2: not in enabled drivers build config 00:02:53.874 dma/hisilicon: not in enabled drivers build config 00:02:53.874 dma/idxd: not in enabled drivers build config 00:02:53.874 dma/ioat: not in enabled drivers build config 00:02:53.874 dma/skeleton: not in enabled drivers build config 00:02:53.874 net/af_packet: not in enabled drivers build config 00:02:53.874 net/af_xdp: not in enabled drivers build config 00:02:53.874 net/ark: not in enabled drivers build config 00:02:53.874 net/atlantic: not in enabled drivers build config 00:02:53.874 net/avp: not in enabled drivers build config 00:02:53.874 net/axgbe: not in enabled drivers build config 00:02:53.874 net/bnx2x: not in enabled drivers build config 00:02:53.874 net/bnxt: not in enabled drivers build config 00:02:53.874 net/bonding: not in enabled drivers build config 00:02:53.874 net/cnxk: not in enabled drivers build config 00:02:53.874 net/cpfl: not in enabled drivers build config 00:02:53.874 net/cxgbe: not in enabled drivers build config 00:02:53.874 net/dpaa: not in enabled drivers build config 00:02:53.874 net/dpaa2: not in enabled drivers build config 00:02:53.874 net/e1000: not in enabled drivers build config 00:02:53.874 net/ena: not in enabled drivers build config 00:02:53.874 net/enetc: not in enabled drivers build config 00:02:53.874 net/enetfec: not in enabled drivers build config 00:02:53.874 net/enic: not in enabled drivers build config 00:02:53.875 net/failsafe: not in enabled drivers build config 00:02:53.875 net/fm10k: not in enabled drivers build config 00:02:53.875 net/gve: not in enabled drivers build config 00:02:53.875 net/hinic: not in enabled drivers build config 00:02:53.875 net/hns3: not in enabled drivers build config 00:02:53.875 net/i40e: not in enabled drivers build config 00:02:53.875 net/iavf: not in 
enabled drivers build config 00:02:53.875 net/ice: not in enabled drivers build config 00:02:53.875 net/idpf: not in enabled drivers build config 00:02:53.875 net/igc: not in enabled drivers build config 00:02:53.875 net/ionic: not in enabled drivers build config 00:02:53.875 net/ipn3ke: not in enabled drivers build config 00:02:53.875 net/ixgbe: not in enabled drivers build config 00:02:53.875 net/mana: not in enabled drivers build config 00:02:53.875 net/memif: not in enabled drivers build config 00:02:53.875 net/mlx4: not in enabled drivers build config 00:02:53.875 net/mlx5: not in enabled drivers build config 00:02:53.875 net/mvneta: not in enabled drivers build config 00:02:53.875 net/mvpp2: not in enabled drivers build config 00:02:53.875 net/netvsc: not in enabled drivers build config 00:02:53.875 net/nfb: not in enabled drivers build config 00:02:53.875 net/nfp: not in enabled drivers build config 00:02:53.875 net/ngbe: not in enabled drivers build config 00:02:53.875 net/null: not in enabled drivers build config 00:02:53.875 net/octeontx: not in enabled drivers build config 00:02:53.875 net/octeon_ep: not in enabled drivers build config 00:02:53.875 net/pcap: not in enabled drivers build config 00:02:53.875 net/pfe: not in enabled drivers build config 00:02:53.875 net/qede: not in enabled drivers build config 00:02:53.875 net/ring: not in enabled drivers build config 00:02:53.875 net/sfc: not in enabled drivers build config 00:02:53.875 net/softnic: not in enabled drivers build config 00:02:53.875 net/tap: not in enabled drivers build config 00:02:53.875 net/thunderx: not in enabled drivers build config 00:02:53.875 net/txgbe: not in enabled drivers build config 00:02:53.875 net/vdev_netvsc: not in enabled drivers build config 00:02:53.875 net/vhost: not in enabled drivers build config 00:02:53.875 net/virtio: not in enabled drivers build config 00:02:53.875 net/vmxnet3: not in enabled drivers build config 00:02:53.875 raw/*: missing internal dependency, "rawdev" 00:02:53.875 crypto/armv8: not in enabled drivers build config 00:02:53.875 crypto/bcmfs: not in enabled drivers build config 00:02:53.875 crypto/caam_jr: not in enabled drivers build config 00:02:53.875 crypto/ccp: not in enabled drivers build config 00:02:53.875 crypto/cnxk: not in enabled drivers build config 00:02:53.875 crypto/dpaa_sec: not in enabled drivers build config 00:02:53.875 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.875 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.875 crypto/mlx5: not in enabled drivers build config 00:02:53.875 crypto/mvsam: not in enabled drivers build config 00:02:53.875 crypto/nitrox: not in enabled drivers build config 00:02:53.875 crypto/null: not in enabled drivers build config 00:02:53.875 crypto/octeontx: not in enabled drivers build config 00:02:53.875 crypto/openssl: not in enabled drivers build config 00:02:53.875 crypto/scheduler: not in enabled drivers build config 00:02:53.875 crypto/uadk: not in enabled drivers build config 00:02:53.875 crypto/virtio: not in enabled drivers build config 00:02:53.875 compress/isal: not in enabled drivers build config 00:02:53.875 compress/mlx5: not in enabled drivers build config 00:02:53.875 compress/octeontx: not in enabled drivers build config 00:02:53.875 compress/zlib: not in enabled drivers build config 00:02:53.875 regex/*: missing internal dependency, "regexdev" 00:02:53.875 ml/*: missing internal dependency, "mldev" 00:02:53.875 vdpa/ifc: not in enabled drivers build config 00:02:53.875 
vdpa/mlx5: not in enabled drivers build config 00:02:53.875 vdpa/nfp: not in enabled drivers build config 00:02:53.875 vdpa/sfc: not in enabled drivers build config 00:02:53.875 event/*: missing internal dependency, "eventdev" 00:02:53.875 baseband/*: missing internal dependency, "bbdev" 00:02:53.875 gpu/*: missing internal dependency, "gpudev" 00:02:53.875 00:02:53.875 00:02:53.875 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] [six further repeats of this warning elided] 00:02:54.389 Build targets in project: 85 00:02:54.389 00:02:54.389 DPDK 23.11.0 00:02:54.389 00:02:54.389 User defined options 00:02:54.389 buildtype : debug 00:02:54.389 default_library : static 00:02:54.389 libdir : lib 00:02:54.389 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:54.389 b_sanitize : address 00:02:54.389 c_args : -fPIC -Werror 00:02:54.389 c_link_args : 00:02:54.389 cpu_instruction_set: native 00:02:54.389 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:54.389 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:54.389 enable_docs : false 00:02:54.390 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:54.390 enable_kmods : false 00:02:54.390 tests : false [an equivalent meson setup invocation is sketched at the end of this section] 00:02:54.390 00:02:54.390 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.955 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:54.955 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:54.955 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:54.955 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:54.955 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.955 [5/264] Linking static target lib/librte_kvargs.a 00:02:54.955 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:54.955 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:54.955 [8/264] Linking static target lib/librte_log.a 00:02:54.955 [9/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:54.955 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.213
[11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:55.213 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.213 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.213 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.213 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:55.471 [16/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.471 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.471 [18/264] Linking static target lib/librte_telemetry.a 00:02:55.471 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:55.471 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:55.471 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:55.471 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:55.471 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:55.471 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:55.471 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:55.735 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:55.735 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:55.735 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:55.735 [29/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.735 [30/264] Linking target lib/librte_log.so.24.0 00:02:55.735 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.735 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.012 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:56.012 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:56.012 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.012 [36/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:56.012 [37/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.012 [38/264] Linking target lib/librte_kvargs.so.24.0 00:02:56.012 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:56.012 [40/264] Linking target lib/librte_telemetry.so.24.0 00:02:56.012 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:56.012 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:56.012 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:56.012 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:56.012 [45/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:56.012 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.012 [47/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:56.012 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.271 [49/264] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:56.271 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.271 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.271 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.271 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.271 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.271 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.271 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.529 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.529 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.529 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.529 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.529 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.529 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.529 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.529 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.529 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.529 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.529 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.787 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.787 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.787 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.787 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.787 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.787 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.787 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.787 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.787 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.787 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.787 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.046 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.046 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.046 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.046 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.046 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.046 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.304 [85/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.304 [86/264] Linking static target lib/librte_ring.a 00:02:57.304 [87/264] Linking static target lib/librte_eal.a 00:02:57.304 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.304 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.304 [90/264] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.304 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.304 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.304 [93/264] Linking static target lib/librte_mempool.a 00:02:57.562 [94/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.562 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.562 [96/264] Linking static target lib/librte_rcu.a 00:02:57.562 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.562 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.821 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.821 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.821 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.821 [102/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.821 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.821 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.821 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.079 [106/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.079 [107/264] Linking static target lib/librte_net.a 00:02:58.079 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.079 [109/264] Linking static target lib/librte_meter.a 00:02:58.079 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.079 [111/264] Linking static target lib/librte_mbuf.a 00:02:58.079 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.079 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.338 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.338 [115/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.338 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.338 [117/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.338 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.597 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.597 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.855 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.855 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.855 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.855 [124/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.855 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.855 [126/264] Linking static target lib/librte_pci.a 00:02:59.114 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.114 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.114 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.114 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.114 [131/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.114 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.114 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.114 [134/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.114 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.114 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.114 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.372 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.372 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.372 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.372 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.372 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.372 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.372 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.631 [145/264] Linking static target lib/librte_cmdline.a 00:02:59.631 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:59.631 [147/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.631 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:59.889 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.889 [150/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:59.889 [151/264] Linking static target lib/librte_timer.a 00:02:59.889 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.146 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.146 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.146 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.146 [156/264] Linking static target lib/librte_compressdev.a 00:03:00.146 [157/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.146 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.146 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.403 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.403 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.403 [162/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.403 [163/264] Linking static target lib/librte_dmadev.a 00:03:00.403 [164/264] Linking static target lib/librte_hash.a 00:03:00.403 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:00.403 [166/264] Linking static target lib/librte_ethdev.a 00:03:00.403 [167/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.660 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.660 [169/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.660 [170/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.660 
[171/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.660 [172/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.660 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.918 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.918 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.918 [176/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.918 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.918 [178/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.918 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.175 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.175 [181/264] Linking static target lib/librte_power.a 00:03:01.175 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.175 [183/264] Linking static target lib/librte_cryptodev.a 00:03:01.434 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.434 [185/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.434 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.434 [187/264] Linking static target lib/librte_reorder.a 00:03:01.434 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.691 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.691 [190/264] Linking static target lib/librte_security.a 00:03:01.691 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.949 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.949 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.949 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.949 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.206 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.206 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.206 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.207 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.207 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.464 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.464 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.721 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.721 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.721 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.721 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.721 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.721 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.721 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.721 [210/264] 
Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.721 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.721 [212/264] Linking static target drivers/librte_bus_vdev.a 00:03:02.721 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.721 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.721 [215/264] Linking static target drivers/librte_bus_pci.a 00:03:02.979 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.979 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.979 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.979 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.237 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.237 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.237 [222/264] Linking static target drivers/librte_mempool_ring.a 00:03:03.237 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.609 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.609 [225/264] Linking target lib/librte_eal.so.24.0 00:03:04.609 [226/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:04.609 [227/264] Linking target lib/librte_meter.so.24.0 00:03:04.609 [228/264] Linking target lib/librte_pci.so.24.0 00:03:04.609 [229/264] Linking target lib/librte_ring.so.24.0 00:03:04.609 [230/264] Linking target lib/librte_dmadev.so.24.0 00:03:04.609 [231/264] Linking target lib/librte_timer.so.24.0 00:03:04.609 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:03:04.867 [233/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:04.867 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:04.867 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:04.867 [236/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:04.867 [237/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:04.868 [238/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.868 [239/264] Linking target lib/librte_rcu.so.24.0 00:03:04.868 [240/264] Linking target lib/librte_mempool.so.24.0 00:03:04.868 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:03:04.868 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:04.868 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:05.149 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:03:05.149 [245/264] Linking target lib/librte_mbuf.so.24.0 00:03:05.149 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:05.149 [247/264] Linking target lib/librte_net.so.24.0 00:03:05.149 [248/264] Linking target lib/librte_compressdev.so.24.0 00:03:05.149 [249/264] Linking target lib/librte_reorder.so.24.0 00:03:05.149 [250/264] Linking target lib/librte_cryptodev.so.24.0 
00:03:05.407 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:05.407 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:05.407 [253/264] Linking target lib/librte_cmdline.so.24.0 00:03:05.407 [254/264] Linking target lib/librte_hash.so.24.0 00:03:05.408 [255/264] Linking target lib/librte_security.so.24.0 00:03:05.408 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:05.974 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.232 [258/264] Linking target lib/librte_ethdev.so.24.0 00:03:06.232 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:06.232 [260/264] Linking target lib/librte_power.so.24.0 00:03:08.762 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.762 [262/264] Linking static target lib/librte_vhost.a 00:03:10.137 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.137 [264/264] Linking target lib/librte_vhost.so.24.0 00:03:10.137 INFO: autodetecting backend as ninja 00:03:10.137 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.073 CC lib/ut/ut.o 00:03:11.073 CC lib/log/log.o 00:03:11.073 CC lib/log/log_deprecated.o 00:03:11.073 CC lib/log/log_flags.o 00:03:11.073 CC lib/ut_mock/mock.o 00:03:11.331 LIB libspdk_ut_mock.a 00:03:11.331 LIB libspdk_ut.a 00:03:11.331 LIB libspdk_log.a 00:03:11.331 CC lib/dma/dma.o 00:03:11.331 CC lib/ioat/ioat.o 00:03:11.331 CXX lib/trace_parser/trace.o 00:03:11.331 CC lib/util/base64.o 00:03:11.331 CC lib/util/bit_array.o 00:03:11.331 CC lib/util/cpuset.o 00:03:11.331 CC lib/util/crc16.o 00:03:11.331 CC lib/util/crc32.o 00:03:11.331 CC lib/util/crc32c.o 00:03:11.589 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.589 CC lib/vfio_user/host/vfio_user.o 00:03:11.589 CC lib/util/crc32_ieee.o 00:03:11.589 CC lib/util/crc64.o 00:03:11.589 CC lib/util/dif.o 00:03:11.589 LIB libspdk_dma.a 00:03:11.589 CC lib/util/fd.o 00:03:11.589 CC lib/util/file.o 00:03:11.589 CC lib/util/hexlify.o 00:03:11.589 CC lib/util/iov.o 00:03:11.860 CC lib/util/math.o 00:03:11.860 CC lib/util/pipe.o 00:03:11.860 LIB libspdk_ioat.a 00:03:11.860 LIB libspdk_vfio_user.a 00:03:11.860 CC lib/util/strerror_tls.o 00:03:11.860 CC lib/util/string.o 00:03:11.860 CC lib/util/uuid.o 00:03:11.860 CC lib/util/fd_group.o 00:03:11.860 CC lib/util/xor.o 00:03:11.860 CC lib/util/zipf.o 00:03:12.458 LIB libspdk_util.a 00:03:12.458 CC lib/conf/conf.o 00:03:12.458 CC lib/json/json_parse.o 00:03:12.458 CC lib/json/json_util.o 00:03:12.458 CC lib/vmd/led.o 00:03:12.458 CC lib/vmd/vmd.o 00:03:12.458 CC lib/json/json_write.o 00:03:12.458 CC lib/rdma/common.o 00:03:12.458 CC lib/idxd/idxd.o 00:03:12.458 CC lib/env_dpdk/env.o 00:03:12.716 LIB libspdk_trace_parser.a 00:03:12.716 CC lib/idxd/idxd_user.o 00:03:12.716 CC lib/env_dpdk/memory.o 00:03:12.716 CC lib/rdma/rdma_verbs.o 00:03:12.716 LIB libspdk_conf.a 00:03:12.716 CC lib/env_dpdk/pci.o 00:03:12.716 CC lib/env_dpdk/init.o 00:03:12.716 LIB libspdk_json.a 00:03:12.716 CC lib/env_dpdk/threads.o 00:03:12.974 CC lib/env_dpdk/pci_ioat.o 00:03:12.974 CC lib/env_dpdk/pci_virtio.o 00:03:12.974 LIB libspdk_rdma.a 00:03:12.974 CC lib/env_dpdk/pci_vmd.o 00:03:12.974 CC lib/env_dpdk/pci_idxd.o 00:03:12.974 CC lib/env_dpdk/pci_event.o 00:03:12.974 CC 
lib/env_dpdk/sigbus_handler.o 00:03:13.231 CC lib/env_dpdk/pci_dpdk.o 00:03:13.231 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.231 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.231 LIB libspdk_idxd.a 00:03:13.231 LIB libspdk_vmd.a 00:03:13.231 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.231 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.231 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.231 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.489 LIB libspdk_jsonrpc.a 00:03:13.748 CC lib/rpc/rpc.o 00:03:14.007 LIB libspdk_rpc.a 00:03:14.007 CC lib/trace/trace_flags.o 00:03:14.007 CC lib/trace/trace.o 00:03:14.007 CC lib/trace/trace_rpc.o 00:03:14.007 CC lib/sock/sock.o 00:03:14.007 CC lib/sock/sock_rpc.o 00:03:14.007 CC lib/notify/notify.o 00:03:14.007 CC lib/notify/notify_rpc.o 00:03:14.266 LIB libspdk_env_dpdk.a 00:03:14.266 LIB libspdk_notify.a 00:03:14.266 LIB libspdk_trace.a 00:03:14.523 CC lib/thread/thread.o 00:03:14.523 CC lib/thread/iobuf.o 00:03:14.523 LIB libspdk_sock.a 00:03:14.779 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:14.779 CC lib/nvme/nvme_ctrlr.o 00:03:14.779 CC lib/nvme/nvme_ns.o 00:03:14.779 CC lib/nvme/nvme_fabric.o 00:03:14.779 CC lib/nvme/nvme_ns_cmd.o 00:03:14.780 CC lib/nvme/nvme_pcie_common.o 00:03:14.780 CC lib/nvme/nvme_pcie.o 00:03:14.780 CC lib/nvme/nvme_qpair.o 00:03:14.780 CC lib/nvme/nvme.o 00:03:15.345 CC lib/nvme/nvme_quirks.o 00:03:15.345 CC lib/nvme/nvme_transport.o 00:03:15.345 CC lib/nvme/nvme_discovery.o 00:03:15.603 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.603 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.603 CC lib/nvme/nvme_tcp.o 00:03:15.603 CC lib/nvme/nvme_opal.o 00:03:15.603 CC lib/nvme/nvme_io_msg.o 00:03:15.603 CC lib/nvme/nvme_poll_group.o 00:03:15.861 CC lib/nvme/nvme_zns.o 00:03:15.861 CC lib/nvme/nvme_cuse.o 00:03:15.861 CC lib/nvme/nvme_vfio_user.o 00:03:15.861 CC lib/nvme/nvme_rdma.o 00:03:16.426 LIB libspdk_thread.a 00:03:16.426 CC lib/virtio/virtio.o 00:03:16.426 CC lib/virtio/virtio_vhost_user.o 00:03:16.426 CC lib/virtio/virtio_vfio_user.o 00:03:16.426 CC lib/blob/blobstore.o 00:03:16.426 CC lib/accel/accel.o 00:03:16.426 CC lib/init/json_config.o 00:03:16.684 CC lib/init/subsystem.o 00:03:16.684 CC lib/init/subsystem_rpc.o 00:03:16.684 CC lib/accel/accel_rpc.o 00:03:16.684 CC lib/accel/accel_sw.o 00:03:16.684 CC lib/virtio/virtio_pci.o 00:03:16.684 CC lib/init/rpc.o 00:03:16.941 CC lib/blob/request.o 00:03:16.941 LIB libspdk_init.a 00:03:16.941 CC lib/blob/zeroes.o 00:03:16.941 CC lib/blob/blob_bs_dev.o 00:03:17.199 LIB libspdk_virtio.a 00:03:17.199 CC lib/event/reactor.o 00:03:17.199 CC lib/event/app.o 00:03:17.199 CC lib/event/log_rpc.o 00:03:17.199 CC lib/event/app_rpc.o 00:03:17.199 CC lib/event/scheduler_static.o 00:03:17.457 LIB libspdk_nvme.a 00:03:17.715 LIB libspdk_event.a 00:03:17.715 LIB libspdk_accel.a 00:03:17.715 CC lib/bdev/bdev.o 00:03:17.715 CC lib/bdev/bdev_rpc.o 00:03:17.715 CC lib/bdev/bdev_zone.o 00:03:17.715 CC lib/bdev/part.o 00:03:17.715 CC lib/bdev/scsi_nvme.o 00:03:20.336 LIB libspdk_blob.a 00:03:20.336 CC lib/blobfs/blobfs.o 00:03:20.336 CC lib/blobfs/tree.o 00:03:20.336 CC lib/lvol/lvol.o 00:03:21.276 LIB libspdk_bdev.a 00:03:21.276 CC lib/nvmf/ctrlr.o 00:03:21.276 CC lib/nvmf/ctrlr_discovery.o 00:03:21.276 CC lib/nvmf/nvmf.o 00:03:21.276 CC lib/nvmf/subsystem.o 00:03:21.276 CC lib/scsi/dev.o 00:03:21.276 CC lib/nvmf/ctrlr_bdev.o 00:03:21.276 CC lib/nbd/nbd.o 00:03:21.276 CC lib/ftl/ftl_core.o 00:03:21.276 LIB libspdk_blobfs.a 00:03:21.276 LIB libspdk_lvol.a 00:03:21.276 CC lib/ftl/ftl_init.o 00:03:21.276 CC lib/ftl/ftl_layout.o 
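At this point the DPDK portion is finished and SPDK's own Makefile-driven build takes over: each "CC lib/X/Y.o" entry compiles one object, and each "LIB libspdk_X.a" entry archives a completed static library. A quick way to confirm what a just-announced archive contains is sketched below; the build/lib/ output directory is an assumption based on the usual SPDK layout, so adjust it if your build drops archives elsewhere:

  #!/usr/bin/env bash
  # Inspect one of the static libraries announced by a "LIB" line above.
  set -euo pipefail
  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  ar t "$SPDK_ROOT/build/lib/libspdk_log.a"    # list the member objects
  # Show the exported symbols; the grep pattern is illustrative and assumes
  # the library's public functions share the spdk_log prefix.
  nm -g --defined-only "$SPDK_ROOT/build/lib/libspdk_log.a" | grep ' spdk_log'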
00:03:21.534 CC lib/scsi/lun.o 00:03:21.534 CC lib/scsi/port.o 00:03:21.534 CC lib/scsi/scsi.o 00:03:21.793 CC lib/ftl/ftl_debug.o 00:03:21.793 CC lib/ftl/ftl_io.o 00:03:21.793 CC lib/nbd/nbd_rpc.o 00:03:21.793 CC lib/ftl/ftl_sb.o 00:03:21.793 CC lib/scsi/scsi_bdev.o 00:03:21.793 CC lib/nvmf/nvmf_rpc.o 00:03:22.051 LIB libspdk_nbd.a 00:03:22.051 CC lib/ftl/ftl_l2p.o 00:03:22.051 CC lib/scsi/scsi_pr.o 00:03:22.051 CC lib/scsi/scsi_rpc.o 00:03:22.051 CC lib/scsi/task.o 00:03:22.051 CC lib/nvmf/transport.o 00:03:22.051 CC lib/nvmf/tcp.o 00:03:22.051 CC lib/ftl/ftl_l2p_flat.o 00:03:22.310 CC lib/nvmf/rdma.o 00:03:22.310 CC lib/ftl/ftl_nv_cache.o 00:03:22.310 CC lib/ftl/ftl_band.o 00:03:22.310 LIB libspdk_scsi.a 00:03:22.310 CC lib/ftl/ftl_band_ops.o 00:03:22.310 CC lib/ftl/ftl_writer.o 00:03:22.569 CC lib/ftl/ftl_rq.o 00:03:22.827 CC lib/ftl/ftl_reloc.o 00:03:22.827 CC lib/ftl/ftl_l2p_cache.o 00:03:22.827 CC lib/ftl/ftl_p2l.o 00:03:22.827 CC lib/ftl/mngt/ftl_mngt.o 00:03:22.827 CC lib/iscsi/conn.o 00:03:23.085 CC lib/vhost/vhost.o 00:03:23.085 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.085 CC lib/iscsi/init_grp.o 00:03:23.085 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.341 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.341 CC lib/vhost/vhost_rpc.o 00:03:23.341 CC lib/iscsi/iscsi.o 00:03:23.341 CC lib/iscsi/md5.o 00:03:23.341 CC lib/iscsi/param.o 00:03:23.598 CC lib/vhost/vhost_scsi.o 00:03:23.598 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.598 CC lib/iscsi/portal_grp.o 00:03:23.598 CC lib/iscsi/tgt_node.o 00:03:23.598 CC lib/iscsi/iscsi_subsystem.o 00:03:23.855 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.855 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.855 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.855 CC lib/vhost/vhost_blk.o 00:03:24.113 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.113 CC lib/vhost/rte_vhost_user.o 00:03:24.113 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.113 CC lib/iscsi/iscsi_rpc.o 00:03:24.113 CC lib/iscsi/task.o 00:03:24.113 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.371 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.371 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.371 CC lib/ftl/utils/ftl_conf.o 00:03:24.371 CC lib/ftl/utils/ftl_md.o 00:03:24.371 CC lib/ftl/utils/ftl_mempool.o 00:03:24.629 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.629 CC lib/ftl/utils/ftl_property.o 00:03:24.629 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.629 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.629 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.887 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.887 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.887 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.887 LIB libspdk_nvmf.a 00:03:24.887 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.887 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.887 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.887 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.887 CC lib/ftl/base/ftl_base_dev.o 00:03:24.887 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.887 CC lib/ftl/ftl_trace.o 00:03:24.887 LIB libspdk_iscsi.a 00:03:25.145 LIB libspdk_vhost.a 00:03:25.440 LIB libspdk_ftl.a 00:03:25.725 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.725 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.726 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.726 CC module/accel/iaa/accel_iaa.o 00:03:25.726 CC module/sock/posix/posix.o 00:03:25.726 CC module/accel/dsa/accel_dsa.o 00:03:25.726 CC module/accel/error/accel_error.o 00:03:25.726 CC module/accel/ioat/accel_ioat.o 00:03:25.726 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.726 CC module/blob/bdev/blob_bdev.o 00:03:25.726 
LIB libspdk_env_dpdk_rpc.a 00:03:25.726 LIB libspdk_scheduler_gscheduler.a 00:03:25.726 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.726 CC module/accel/ioat/accel_ioat_rpc.o 00:03:25.726 CC module/accel/iaa/accel_iaa_rpc.o 00:03:25.984 CC module/accel/error/accel_error_rpc.o 00:03:25.984 CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.984 LIB libspdk_scheduler_dynamic.a 00:03:25.984 LIB libspdk_accel_ioat.a 00:03:25.984 LIB libspdk_blob_bdev.a 00:03:25.984 LIB libspdk_accel_iaa.a 00:03:25.984 LIB libspdk_accel_dsa.a 00:03:25.984 LIB libspdk_accel_error.a 00:03:25.984 CC module/bdev/error/vbdev_error.o 00:03:25.984 CC module/bdev/delay/vbdev_delay.o 00:03:25.984 CC module/bdev/null/bdev_null.o 00:03:25.984 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.984 CC module/bdev/malloc/bdev_malloc.o 00:03:25.984 CC module/blobfs/bdev/blobfs_bdev.o 00:03:26.243 CC module/bdev/gpt/gpt.o 00:03:26.243 CC module/bdev/nvme/bdev_nvme.o 00:03:26.243 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.243 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:26.243 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.501 CC module/bdev/null/bdev_null_rpc.o 00:03:26.501 CC module/bdev/error/vbdev_error_rpc.o 00:03:26.501 LIB libspdk_blobfs_bdev.a 00:03:26.501 LIB libspdk_sock_posix.a 00:03:26.501 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:26.501 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.501 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:26.501 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:26.501 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:26.501 LIB libspdk_bdev_null.a 00:03:26.501 LIB libspdk_bdev_error.a 00:03:26.760 LIB libspdk_bdev_gpt.a 00:03:26.760 CC module/bdev/nvme/nvme_rpc.o 00:03:26.760 LIB libspdk_bdev_passthru.a 00:03:26.760 LIB libspdk_bdev_delay.a 00:03:26.760 CC module/bdev/raid/bdev_raid.o 00:03:26.760 LIB libspdk_bdev_malloc.a 00:03:26.760 CC module/bdev/split/vbdev_split.o 00:03:26.760 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.760 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.760 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.760 CC module/bdev/aio/bdev_aio.o 00:03:26.760 LIB libspdk_bdev_lvol.a 00:03:27.018 CC module/bdev/raid/raid0.o 00:03:27.018 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.018 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.018 CC module/bdev/aio/bdev_aio_rpc.o 00:03:27.018 CC module/bdev/raid/raid1.o 00:03:27.018 CC module/bdev/raid/concat.o 00:03:27.018 LIB libspdk_bdev_split.a 00:03:27.018 LIB libspdk_bdev_zone_block.a 00:03:27.018 CC module/bdev/raid/raid5f.o 00:03:27.277 LIB libspdk_bdev_aio.a 00:03:27.277 CC module/bdev/nvme/bdev_mdns_client.o 00:03:27.277 CC module/bdev/nvme/vbdev_opal.o 00:03:27.277 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.277 CC module/bdev/ftl/bdev_ftl.o 00:03:27.277 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.277 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.277 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.277 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.535 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.535 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.535 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.794 LIB libspdk_bdev_iscsi.a 00:03:27.794 LIB libspdk_bdev_ftl.a 00:03:27.794 LIB libspdk_bdev_raid.a 00:03:28.052 LIB libspdk_bdev_virtio.a 00:03:28.618 LIB libspdk_bdev_nvme.a 00:03:28.876 CC module/event/subsystems/sock/sock.o 00:03:28.876 CC module/event/subsystems/vmd/vmd.o 00:03:28.876 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.876 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.876 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:28.876 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.876 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:29.134 LIB libspdk_event_scheduler.a 00:03:29.134 LIB libspdk_event_vmd.a 00:03:29.134 LIB libspdk_event_sock.a 00:03:29.134 LIB libspdk_event_iobuf.a 00:03:29.134 LIB libspdk_event_vhost_blk.a 00:03:29.392 CC module/event/subsystems/accel/accel.o 00:03:29.392 LIB libspdk_event_accel.a 00:03:29.649 CC module/event/subsystems/bdev/bdev.o 00:03:29.649 LIB libspdk_event_bdev.a 00:03:29.906 CC module/event/subsystems/nbd/nbd.o 00:03:29.906 CC module/event/subsystems/scsi/scsi.o 00:03:29.906 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.906 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.906 LIB libspdk_event_nbd.a 00:03:30.162 LIB libspdk_event_scsi.a 00:03:30.162 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.162 LIB libspdk_event_nvmf.a 00:03:30.162 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.420 LIB libspdk_event_vhost_scsi.a 00:03:30.420 LIB libspdk_event_iscsi.a 00:03:30.420 CXX app/trace/trace.o 00:03:30.679 CC app/trace_record/trace_record.o 00:03:30.679 CC app/iscsi_tgt/iscsi_tgt.o 00:03:30.679 CC app/nvmf_tgt/nvmf_main.o 00:03:30.679 CC examples/accel/perf/accel_perf.o 00:03:30.679 CC examples/ioat/perf/perf.o 00:03:30.679 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.679 CC test/accel/dif/dif.o 00:03:30.679 CC app/spdk_tgt/spdk_tgt.o 00:03:30.679 CC examples/blob/hello_world/hello_blob.o 00:03:30.938 LINK spdk_trace_record 00:03:30.938 LINK nvmf_tgt 00:03:30.938 LINK iscsi_tgt 00:03:30.938 LINK ioat_perf 00:03:30.938 LINK hello_bdev 00:03:30.938 LINK spdk_tgt 00:03:30.938 LINK hello_blob 00:03:31.197 LINK spdk_trace 00:03:31.197 LINK accel_perf 00:03:31.197 LINK dif 00:03:31.456 CC examples/ioat/verify/verify.o 00:03:31.456 CC examples/blob/cli/blobcli.o 00:03:31.714 CC test/app/bdev_svc/bdev_svc.o 00:03:31.714 LINK verify 00:03:31.714 LINK bdev_svc 00:03:32.292 LINK blobcli 00:03:32.292 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.550 CC test/app/histogram_perf/histogram_perf.o 00:03:32.550 LINK histogram_perf 00:03:32.809 LINK nvme_fuzz 00:03:33.375 CC test/bdev/bdevio/bdevio.o 00:03:33.940 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.940 LINK bdevio 00:03:33.941 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.198 CC test/blobfs/mkfs/mkfs.o 00:03:34.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.198 CC examples/nvme/hello_world/hello_world.o 00:03:34.198 CC examples/sock/hello_world/hello_sock.o 00:03:34.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.198 LINK mkfs 00:03:34.455 LINK hello_world 00:03:34.455 LINK hello_sock 00:03:34.711 CC app/spdk_lspci/spdk_lspci.o 00:03:34.711 LINK vhost_fuzz 00:03:34.711 CC app/spdk_nvme_perf/perf.o 00:03:34.968 LINK bdevperf 00:03:34.968 LINK spdk_lspci 00:03:35.226 CC app/spdk_nvme_identify/identify.o 00:03:35.483 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.483 LINK spdk_nvme_discover 00:03:35.741 CC app/spdk_top/spdk_top.o 00:03:35.741 CC examples/nvme/reconnect/reconnect.o 00:03:35.741 LINK spdk_nvme_perf 00:03:35.998 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.998 LINK reconnect 00:03:36.256 LINK lsvmd 00:03:36.514 LINK spdk_nvme_identify 00:03:36.514 LINK iscsi_fuzz 00:03:36.772 CC examples/vmd/led/led.o 00:03:36.772 LINK spdk_top 00:03:37.029 LINK led 00:03:37.029 CC examples/nvmf/nvmf/nvmf.o 00:03:37.594 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.594 CC examples/idxd/perf/perf.o 00:03:37.594 CC 
examples/util/zipf/zipf.o 00:03:37.594 LINK nvmf 00:03:37.594 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.594 CC examples/thread/thread/thread_ex.o 00:03:37.594 LINK zipf 00:03:37.594 LINK interrupt_tgt 00:03:38.169 CC examples/nvme/arbitration/arbitration.o 00:03:38.169 CC examples/nvme/hotplug/hotplug.o 00:03:38.169 CC test/app/jsoncat/jsoncat.o 00:03:38.169 LINK idxd_perf 00:03:38.169 LINK thread 00:03:38.169 LINK nvme_manage 00:03:38.169 CC app/vhost/vhost.o 00:03:38.169 LINK jsoncat 00:03:38.169 CC app/spdk_dd/spdk_dd.o 00:03:38.480 LINK hotplug 00:03:38.480 LINK vhost 00:03:38.480 LINK arbitration 00:03:38.738 LINK spdk_dd 00:03:38.738 CC app/fio/nvme/fio_plugin.o 00:03:38.996 CC test/app/stub/stub.o 00:03:39.254 LINK stub 00:03:39.254 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.512 LINK spdk_nvme 00:03:39.512 LINK cmb_copy 00:03:40.078 TEST_HEADER include/spdk/accel_module.h 00:03:40.078 TEST_HEADER include/spdk/bit_pool.h 00:03:40.078 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.078 TEST_HEADER include/spdk/ioat.h 00:03:40.078 CC examples/nvme/abort/abort.o 00:03:40.078 TEST_HEADER include/spdk/blobfs.h 00:03:40.078 TEST_HEADER include/spdk/notify.h 00:03:40.078 TEST_HEADER include/spdk/pipe.h 00:03:40.078 TEST_HEADER include/spdk/accel.h 00:03:40.078 TEST_HEADER include/spdk/file.h 00:03:40.078 TEST_HEADER include/spdk/version.h 00:03:40.078 TEST_HEADER include/spdk/trace_parser.h 00:03:40.078 TEST_HEADER include/spdk/opal_spec.h 00:03:40.078 TEST_HEADER include/spdk/uuid.h 00:03:40.078 TEST_HEADER include/spdk/likely.h 00:03:40.078 TEST_HEADER include/spdk/dif.h 00:03:40.078 TEST_HEADER include/spdk/memory.h 00:03:40.078 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.078 TEST_HEADER include/spdk/dma.h 00:03:40.078 TEST_HEADER include/spdk/nbd.h 00:03:40.078 TEST_HEADER include/spdk/conf.h 00:03:40.078 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.078 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.078 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.078 TEST_HEADER include/spdk/mmio.h 00:03:40.078 TEST_HEADER include/spdk/json.h 00:03:40.078 TEST_HEADER include/spdk/opal.h 00:03:40.078 TEST_HEADER include/spdk/bdev.h 00:03:40.078 TEST_HEADER include/spdk/base64.h 00:03:40.078 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.078 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.078 TEST_HEADER include/spdk/fd.h 00:03:40.078 TEST_HEADER include/spdk/barrier.h 00:03:40.078 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.078 TEST_HEADER include/spdk/zipf.h 00:03:40.078 TEST_HEADER include/spdk/nvmf.h 00:03:40.078 TEST_HEADER include/spdk/queue.h 00:03:40.078 TEST_HEADER include/spdk/xor.h 00:03:40.078 TEST_HEADER include/spdk/cpuset.h 00:03:40.078 TEST_HEADER include/spdk/thread.h 00:03:40.078 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.078 TEST_HEADER include/spdk/fd_group.h 00:03:40.078 TEST_HEADER include/spdk/tree.h 00:03:40.078 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.078 TEST_HEADER include/spdk/crc64.h 00:03:40.078 TEST_HEADER include/spdk/assert.h 00:03:40.078 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.078 TEST_HEADER include/spdk/endian.h 00:03:40.078 TEST_HEADER include/spdk/pci_ids.h 00:03:40.078 TEST_HEADER include/spdk/log.h 00:03:40.078 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.078 TEST_HEADER include/spdk/ftl.h 00:03:40.078 TEST_HEADER include/spdk/config.h 00:03:40.078 TEST_HEADER include/spdk/vhost.h 00:03:40.078 TEST_HEADER include/spdk/bdev_module.h 00:03:40.078 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.078 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:40.078 TEST_HEADER include/spdk/crc16.h 00:03:40.078 TEST_HEADER include/spdk/nvme.h 00:03:40.078 TEST_HEADER include/spdk/stdinc.h 00:03:40.078 TEST_HEADER include/spdk/scsi.h 00:03:40.078 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.078 TEST_HEADER include/spdk/idxd.h 00:03:40.335 TEST_HEADER include/spdk/hexlify.h 00:03:40.335 TEST_HEADER include/spdk/reduce.h 00:03:40.335 TEST_HEADER include/spdk/crc32.h 00:03:40.335 TEST_HEADER include/spdk/init.h 00:03:40.335 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.335 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.335 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.335 TEST_HEADER include/spdk/util.h 00:03:40.335 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.335 TEST_HEADER include/spdk/env.h 00:03:40.335 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.335 TEST_HEADER include/spdk/lvol.h 00:03:40.335 TEST_HEADER include/spdk/histogram_data.h 00:03:40.335 TEST_HEADER include/spdk/event.h 00:03:40.335 TEST_HEADER include/spdk/trace.h 00:03:40.335 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.335 TEST_HEADER include/spdk/string.h 00:03:40.335 TEST_HEADER include/spdk/ublk.h 00:03:40.335 TEST_HEADER include/spdk/bit_array.h 00:03:40.335 TEST_HEADER include/spdk/scheduler.h 00:03:40.335 TEST_HEADER include/spdk/blob.h 00:03:40.335 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.335 TEST_HEADER include/spdk/sock.h 00:03:40.335 TEST_HEADER include/spdk/vmd.h 00:03:40.335 TEST_HEADER include/spdk/rpc.h 00:03:40.335 CXX test/cpp_headers/accel_module.o 00:03:40.335 LINK pmr_persistence 00:03:40.335 CXX test/cpp_headers/bit_pool.o 00:03:40.592 CC test/dma/test_dma/test_dma.o 00:03:40.592 CC app/fio/bdev/fio_plugin.o 00:03:40.592 CXX test/cpp_headers/ioat.o 00:03:40.592 LINK abort 00:03:40.850 CXX test/cpp_headers/blobfs.o 00:03:40.850 LINK test_dma 00:03:40.850 CXX test/cpp_headers/notify.o 00:03:41.107 CXX test/cpp_headers/pipe.o 00:03:41.107 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.107 CC test/env/vtophys/vtophys.o 00:03:41.107 CXX test/cpp_headers/accel.o 00:03:41.107 LINK spdk_bdev 00:03:41.365 LINK vtophys 00:03:41.365 CXX test/cpp_headers/file.o 00:03:41.365 CXX test/cpp_headers/version.o 00:03:41.365 LINK mem_callbacks 00:03:41.623 CC test/env/memory/memory_ut.o 00:03:41.623 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.623 CC test/env/pci/pci_ut.o 00:03:41.623 CXX test/cpp_headers/trace_parser.o 00:03:41.623 LINK env_dpdk_post_init 00:03:41.623 CXX test/cpp_headers/opal_spec.o 00:03:41.881 CXX test/cpp_headers/uuid.o 00:03:41.881 CC test/event/event_perf/event_perf.o 00:03:42.140 CC test/lvol/esnap/esnap.o 00:03:42.140 LINK pci_ut 00:03:42.140 CC test/event/reactor/reactor.o 00:03:42.140 CXX test/cpp_headers/likely.o 00:03:42.140 CC test/event/reactor_perf/reactor_perf.o 00:03:42.140 LINK event_perf 00:03:42.140 LINK reactor 00:03:42.140 LINK reactor_perf 00:03:42.397 CXX test/cpp_headers/dif.o 00:03:42.397 LINK memory_ut 00:03:42.397 CXX test/cpp_headers/memory.o 00:03:42.655 CC test/event/app_repeat/app_repeat.o 00:03:42.655 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.655 CC test/event/scheduler/scheduler.o 00:03:42.655 LINK app_repeat 00:03:42.915 CXX test/cpp_headers/dma.o 00:03:42.915 CXX test/cpp_headers/nbd.o 00:03:42.915 CXX test/cpp_headers/conf.o 00:03:42.915 LINK scheduler 00:03:42.915 CXX test/cpp_headers/env_dpdk.o 00:03:42.915 CC test/nvme/aer/aer.o 00:03:43.173 CC test/rpc_client/rpc_client_test.o 00:03:43.173 CXX test/cpp_headers/nvmf_spec.o 00:03:43.173 CC 
test/thread/poller_perf/poller_perf.o 00:03:43.173 LINK rpc_client_test 00:03:43.173 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:43.430 LINK aer 00:03:43.430 LINK poller_perf 00:03:43.430 CXX test/cpp_headers/iscsi_spec.o 00:03:43.430 CXX test/cpp_headers/mmio.o 00:03:43.430 LINK histogram_ut 00:03:43.688 CXX test/cpp_headers/json.o 00:03:43.688 CXX test/cpp_headers/opal.o 00:03:44.254 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:44.254 CC test/nvme/reset/reset.o 00:03:44.254 CC test/nvme/sgl/sgl.o 00:03:44.254 CXX test/cpp_headers/bdev.o 00:03:44.254 CC test/thread/lock/spdk_lock.o 00:03:44.254 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:44.254 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:44.254 CXX test/cpp_headers/base64.o 00:03:44.511 CXX test/cpp_headers/blobfs_bdev.o 00:03:44.511 LINK reset 00:03:44.511 LINK sgl 00:03:44.511 CC test/nvme/e2edp/nvme_dp.o 00:03:44.511 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.769 CXX test/cpp_headers/fd.o 00:03:44.769 LINK nvme_dp 00:03:45.026 LINK blob_bdev_ut 00:03:45.026 CXX test/cpp_headers/barrier.o 00:03:45.026 CXX test/cpp_headers/scsi_spec.o 00:03:45.284 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:45.284 CXX test/cpp_headers/zipf.o 00:03:45.559 CXX test/cpp_headers/nvmf.o 00:03:45.559 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:45.821 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:45.821 CXX test/cpp_headers/queue.o 00:03:45.821 CXX test/cpp_headers/xor.o 00:03:46.079 LINK scsi_nvme_ut 00:03:46.079 CC test/nvme/overhead/overhead.o 00:03:46.079 CXX test/cpp_headers/cpuset.o 00:03:46.337 CXX test/cpp_headers/thread.o 00:03:46.337 LINK spdk_lock 00:03:46.595 CXX test/cpp_headers/bdev_zone.o 00:03:46.595 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:46.595 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:46.595 LINK overhead 00:03:46.595 CXX test/cpp_headers/fd_group.o 00:03:46.854 CXX test/cpp_headers/tree.o 00:03:46.854 LINK accel_ut 00:03:46.854 LINK gpt_ut 00:03:46.854 CXX test/cpp_headers/blob_bdev.o 00:03:47.113 CXX test/cpp_headers/crc64.o 00:03:47.113 CXX test/cpp_headers/assert.o 00:03:47.372 CXX test/cpp_headers/nvme_spec.o 00:03:47.372 CXX test/cpp_headers/endian.o 00:03:47.372 CC test/nvme/err_injection/err_injection.o 00:03:47.372 CC test/nvme/startup/startup.o 00:03:47.631 CXX test/cpp_headers/pci_ids.o 00:03:47.631 CC test/nvme/reserve/reserve.o 00:03:47.631 LINK err_injection 00:03:47.631 CXX test/cpp_headers/log.o 00:03:47.631 LINK vbdev_lvol_ut 00:03:47.631 LINK startup 00:03:47.631 CC test/nvme/simple_copy/simple_copy.o 00:03:47.889 LINK reserve 00:03:47.889 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.889 LINK esnap 00:03:47.889 CXX test/cpp_headers/ftl.o 00:03:48.147 LINK simple_copy 00:03:48.147 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:48.147 CXX test/cpp_headers/config.o 00:03:48.147 CXX test/cpp_headers/vhost.o 00:03:48.406 CXX test/cpp_headers/bdev_module.o 00:03:48.406 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:48.664 CXX test/cpp_headers/nvme_intel.o 00:03:48.664 CXX test/cpp_headers/idxd_spec.o 00:03:48.664 CXX test/cpp_headers/crc16.o 00:03:48.923 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:48.923 CXX test/cpp_headers/nvme.o 00:03:48.923 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:48.923 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:49.182 CXX test/cpp_headers/stdinc.o 00:03:49.182 LINK tree_ut 00:03:49.182 LINK bdev_zone_ut 00:03:49.440 CXX test/cpp_headers/scsi.o 00:03:49.440 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:49.440 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:49.440 LINK part_ut 00:03:49.698 CC test/nvme/connect_stress/connect_stress.o 00:03:49.699 CXX test/cpp_headers/idxd.o 00:03:49.699 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:49.699 CXX test/cpp_headers/hexlify.o 00:03:49.957 LINK connect_stress 00:03:49.957 LINK vbdev_zone_block_ut 00:03:49.957 CXX test/cpp_headers/reduce.o 00:03:50.216 CXX test/cpp_headers/crc32.o 00:03:50.216 LINK bdev_ut 00:03:50.474 CXX test/cpp_headers/init.o 00:03:50.474 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:50.474 CC test/nvme/boot_partition/boot_partition.o 00:03:50.474 CXX test/cpp_headers/nvmf_transport.o 00:03:50.733 LINK boot_partition 00:03:50.733 LINK bdev_raid_ut 00:03:50.733 LINK blobfs_bdev_ut 00:03:50.733 CXX test/cpp_headers/nvme_zns.o 00:03:50.733 CC test/nvme/compliance/nvme_compliance.o 00:03:50.991 LINK blobfs_async_ut 00:03:50.991 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.991 CXX test/cpp_headers/vfio_user_spec.o 00:03:50.991 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:50.991 LINK blobfs_sync_ut 00:03:50.991 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:50.991 CXX test/cpp_headers/util.o 00:03:50.991 LINK fused_ordering 00:03:51.250 LINK nvme_compliance 00:03:51.250 CXX test/cpp_headers/jsonrpc.o 00:03:51.250 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:51.250 CXX test/cpp_headers/env.o 00:03:51.508 LINK bdev_raid_sb_ut 00:03:51.508 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:51.508 CXX test/cpp_headers/nvmf_cmd.o 00:03:51.508 LINK concat_ut 00:03:51.767 LINK doorbell_aers 00:03:51.767 CXX test/cpp_headers/lvol.o 00:03:51.767 CC test/nvme/fdp/fdp.o 00:03:51.767 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:51.767 LINK raid1_ut 00:03:51.767 CXX test/cpp_headers/histogram_data.o 00:03:52.025 CC test/nvme/cuse/cuse.o 00:03:52.025 CXX test/cpp_headers/event.o 00:03:52.025 LINK fdp 00:03:52.025 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:52.284 CXX test/cpp_headers/trace.o 00:03:52.284 CXX test/cpp_headers/ioat_spec.o 00:03:52.284 LINK bdev_ut 00:03:52.284 CXX test/cpp_headers/string.o 00:03:52.284 CXX test/cpp_headers/ublk.o 00:03:52.543 CXX test/cpp_headers/bit_array.o 00:03:52.543 CXX test/cpp_headers/scheduler.o 00:03:52.543 CXX test/cpp_headers/blob.o 00:03:52.543 CXX test/cpp_headers/gpt_spec.o 00:03:52.543 CXX test/cpp_headers/sock.o 00:03:52.543 CXX test/cpp_headers/vmd.o 00:03:52.543 CXX test/cpp_headers/rpc.o 00:03:52.801 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:52.801 LINK cuse 00:03:52.801 CC test/unit/lib/event/app.c/app_ut.o 00:03:52.801 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:52.801 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:52.801 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:53.078 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:53.079 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:53.079 LINK dma_ut 00:03:53.079 LINK blob_ut 00:03:53.364 LINK raid5f_ut 00:03:53.364 LINK ioat_ut 00:03:53.364 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:53.364 LINK init_grp_ut 00:03:53.623 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:53.623 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:53.623 LINK app_ut 00:03:53.623 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:53.623 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:53.623 LINK reactor_ut 00:03:53.882 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 
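The test/unit/lib/.../*_ut.o objects compiled in this stretch are each linked into a standalone CUnit binary by the interleaved "LINK <name>_ut" entries. Once built, any one of them can be run directly, or the whole set can be driven through SPDK's unittest.sh wrapper; the in-tree binary location below is an assumption inferred from the object paths in this log, not a documented install path:

  #!/usr/bin/env bash
  # Run a single unit-test suite, or the full set via the wrapper script.
  set -euo pipefail
  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  "$SPDK_ROOT/test/unit/lib/json/json_parse.c/json_parse_ut"   # one CUnit suite
  "$SPDK_ROOT/test/unit/unittest.sh"   # drives every *_ut binary in sequence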
00:03:54.140 LINK jsonrpc_server_ut 00:03:54.140 LINK conn_ut 00:03:54.140 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:54.140 LINK json_util_ut 00:03:54.140 LINK param_ut 00:03:54.398 LINK json_write_ut 00:03:54.398 CC test/unit/lib/log/log.c/log_ut.o 00:03:54.398 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:54.398 LINK portal_grp_ut 00:03:54.398 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:54.398 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:54.654 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:54.654 LINK log_ut 00:03:54.912 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:54.912 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:54.912 LINK notify_ut 00:03:54.912 LINK tgt_node_ut 00:03:54.912 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:55.170 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:55.735 LINK iscsi_ut 00:03:55.735 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:55.993 LINK nvme_ut 00:03:55.993 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:55.993 LINK json_parse_ut 00:03:55.993 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:55.993 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:56.250 LINK nvme_ctrlr_cmd_ut 00:03:56.250 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:56.250 LINK lvol_ut 00:03:56.250 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:56.508 LINK dev_ut 00:03:56.765 LINK bdev_nvme_ut 00:03:56.765 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:56.765 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:57.023 LINK nvme_ns_ut 00:03:57.023 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:57.280 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:57.280 LINK scsi_ut 00:03:57.539 LINK lun_ut 00:03:57.539 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:57.539 LINK ctrlr_bdev_ut 00:03:57.796 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:57.797 LINK nvme_ctrlr_ut 00:03:57.797 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:58.054 LINK nvme_ns_cmd_ut 00:03:58.054 LINK subsystem_ut 00:03:58.054 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:58.312 LINK ctrlr_discovery_ut 00:03:58.312 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:58.312 LINK scsi_bdev_ut 00:03:58.312 LINK scsi_pr_ut 00:03:58.312 LINK ctrlr_ut 00:03:58.570 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:58.570 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:58.570 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:58.570 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:58.828 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:58.828 LINK nvmf_ut 00:03:59.086 LINK tcp_ut 00:03:59.086 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:59.086 LINK nvme_quirks_ut 00:03:59.344 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:59.344 LINK nvme_poll_group_ut 00:03:59.602 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:59.602 LINK nvme_ns_ocssd_cmd_ut 00:03:59.603 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:59.861 LINK nvme_qpair_ut 00:03:59.861 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:59.861 LINK nvme_transport_ut 00:03:59.861 LINK nvme_io_msg_ut 00:04:00.119 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:00.119 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:00.119 LINK nvme_pcie_ut 00:04:00.119 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:00.378 LINK nvme_fabric_ut 00:04:00.378 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:00.636 LINK nvme_opal_ut 00:04:00.636 CC 
test/unit/lib/thread/thread.c/thread_ut.o 00:04:00.636 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:00.895 LINK nvme_pcie_common_ut 00:04:01.154 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:01.154 LINK nvme_tcp_ut 00:04:01.413 LINK rdma_ut 00:04:01.413 LINK base64_ut 00:04:01.413 LINK iobuf_ut 00:04:01.413 LINK posix_ut 00:04:01.675 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:01.675 LINK nvme_cuse_ut 00:04:01.675 LINK sock_ut 00:04:01.675 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:01.675 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:01.675 LINK transport_ut 00:04:01.675 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:01.945 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:01.945 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:01.945 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:01.945 LINK cpuset_ut 00:04:01.945 LINK crc16_ut 00:04:01.945 LINK crc32_ieee_ut 00:04:01.945 LINK pci_event_ut 00:04:01.945 LINK bit_array_ut 00:04:01.945 LINK crc32c_ut 00:04:02.206 LINK nvme_rdma_ut 00:04:02.206 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:02.206 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:02.206 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:02.464 LINK subsystem_ut 00:04:02.464 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:02.464 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:02.464 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:02.464 LINK crc64_ut 00:04:02.464 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:02.464 CC test/unit/lib/util/math.c/math_ut.o 00:04:02.721 LINK iov_ut 00:04:02.721 CC test/unit/lib/util/string.c/string_ut.o 00:04:02.721 LINK math_ut 00:04:02.721 LINK rpc_ut 00:04:02.979 LINK thread_ut 00:04:02.979 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:02.979 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:02.979 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:02.979 LINK string_ut 00:04:03.237 LINK idxd_user_ut 00:04:03.237 LINK pipe_ut 00:04:03.237 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:03.237 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:03.237 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:03.237 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:03.237 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:03.494 LINK xor_ut 00:04:03.494 LINK ftl_l2p_ut 00:04:03.494 LINK common_ut 00:04:03.494 LINK ftl_bitmap_ut 00:04:03.751 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:03.751 LINK dif_ut 00:04:03.751 LINK ftl_mempool_ut 00:04:03.751 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:03.751 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:04.009 LINK ftl_io_ut 00:04:04.009 LINK idxd_ut 00:04:04.279 LINK ftl_mngt_ut 00:04:04.539 LINK ftl_band_ut 00:04:04.539 LINK vhost_ut 00:04:05.102 LINK ftl_layout_upgrade_ut 00:04:05.102 LINK ftl_sb_ut 00:04:05.360 00:04:05.360 real 1m55.681s 00:04:05.360 user 10m1.050s 00:04:05.360 sys 1m46.161s 00:04:05.360 ************************************ 00:04:05.360 END TEST unittest_build 00:04:05.360 ************************************ 00:04:05.360 22:12:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:05.360 22:12:04 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.360 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:04:05.360 22:12:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:05.360 22:12:04 -- nvmf/common.sh@7 -- # uname -s 00:04:05.360 22:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.360 22:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
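The output has now switched from build progress to autotest.sh executing under bash xtrace: the leading 00:04:05.360 column is the Jenkins timestamper, while the "22:12:04 -- script@line -- # command" part is plain `set -x` output with a customized PS4. The exact PS4 lives in SPDK's test/common/autotest_common.sh; the sketch below is an approximation that reproduces the same style of prefix, not the verbatim value:

  #!/usr/bin/env bash
  # Approximate the xtrace prefixes seen in this log (not SPDK's exact PS4).
  # \t expands to the current time; BASH_SOURCE/LINENO are evaluated per trace.
  export PS4=' \t -- ${BASH_SOURCE[0]##*/}@${LINENO} -- # '
  set -x
  NVMF_PORT=4420   # traces roughly as: 22:12:04 -- myscript.sh@6 -- # NVMF_PORT=4420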
00:04:05.360 22:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.360 22:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.360 22:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.360 22:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.361 22:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.361 22:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.361 22:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.361 22:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.361 22:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:332b5754-f6a4-4fbc-b9af-db96909f34b4 00:04:05.361 22:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=332b5754-f6a4-4fbc-b9af-db96909f34b4 00:04:05.361 22:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.361 22:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.361 22:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.361 22:12:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.361 22:12:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.361 22:12:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.361 22:12:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.361 22:12:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:05.361 22:12:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:05.361 22:12:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:05.361 22:12:04 -- paths/export.sh@5 -- # export PATH 00:04:05.361 22:12:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:05.361 22:12:04 -- nvmf/common.sh@46 -- # : 0 00:04:05.361 22:12:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:05.361 22:12:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:05.361 22:12:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:05.361 22:12:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.361 22:12:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.361 22:12:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:05.361 22:12:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:05.361 22:12:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:05.361 22:12:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.361 22:12:04 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.619 22:12:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.619 22:12:04 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:05.619 22:12:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.619 22:12:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.619 22:12:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.619 22:12:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:06.185 22:12:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:06.185 22:12:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:06.185 22:12:04 -- spdk/autotest.sh@48 -- # udevadm_pid=93840 00:04:06.185 22:12:04 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:06.185 22:12:04 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:06.185 22:12:04 -- spdk/autotest.sh@54 -- # echo 93873 00:04:06.185 22:12:04 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:06.185 22:12:05 -- spdk/autotest.sh@56 -- # echo 93946 00:04:06.185 22:12:05 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:06.185 22:12:05 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:06.185 22:12:05 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.185 22:12:05 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:06.185 22:12:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:06.185 22:12:05 -- common/autotest_common.sh@10 -- # set +x 00:04:06.185 22:12:05 -- spdk/autotest.sh@70 -- # create_test_list 00:04:06.185 22:12:05 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:06.185 22:12:05 -- common/autotest_common.sh@10 -- # set +x 00:04:06.185 22:12:05 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:06.185 22:12:05 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:06.185 22:12:05 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:06.185 22:12:05 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:06.185 22:12:05 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.185 22:12:05 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:06.185 22:12:05 -- common/autotest_common.sh@1440 -- # uname 00:04:06.185 22:12:05 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:06.185 22:12:05 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:06.185 22:12:05 -- common/autotest_common.sh@1460 -- # uname 00:04:06.185 22:12:05 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:06.185 22:12:05 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:06.185 22:12:05 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:06.185 22:12:05 -- spdk/autotest.sh@83 -- # hash lcov 00:04:06.185 22:12:05 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:06.185 22:12:05 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:06.185 --rc lcov_branch_coverage=1 00:04:06.185 --rc lcov_function_coverage=1 00:04:06.185 --rc genhtml_branch_coverage=1 00:04:06.185 --rc genhtml_function_coverage=1 00:04:06.185 --rc genhtml_legend=1 00:04:06.185 --rc geninfo_all_blocks=1 00:04:06.185 ' 00:04:06.185 22:12:05 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:06.185 --rc lcov_branch_coverage=1 
00:04:06.185 --rc lcov_function_coverage=1 00:04:06.185 --rc genhtml_branch_coverage=1 00:04:06.185 --rc genhtml_function_coverage=1 00:04:06.185 --rc genhtml_legend=1 00:04:06.185 --rc geninfo_all_blocks=1 00:04:06.185 ' 00:04:06.185 22:12:05 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:06.185 --rc lcov_branch_coverage=1 00:04:06.185 --rc lcov_function_coverage=1 00:04:06.185 --rc genhtml_branch_coverage=1 00:04:06.185 --rc genhtml_function_coverage=1 00:04:06.185 --rc genhtml_legend=1 00:04:06.185 --rc geninfo_all_blocks=1 00:04:06.185 --no-external' 00:04:06.185 22:12:05 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:06.185 --rc lcov_branch_coverage=1 00:04:06.185 --rc lcov_function_coverage=1 00:04:06.185 --rc genhtml_branch_coverage=1 00:04:06.185 --rc genhtml_function_coverage=1 00:04:06.185 --rc genhtml_legend=1 00:04:06.185 --rc geninfo_all_blocks=1 00:04:06.185 --no-external' 00:04:06.185 22:12:05 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:06.185 lcov: LCOV version 1.15 00:04:06.185 22:12:05 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no 
functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:08.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:08.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:08.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:08.089 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:08.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:08.348 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:08.348 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:08.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:55.017 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:55.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:55.017 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:55.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:55.017 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:55.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
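The warning pairs above are benign rather than a test failure: each of these objects compiled to no executable functions (the test/cpp_headers stubs merely include one public SPDK header apiece, and the lib/ftl/upgrade sources hit the same condition), so their .gcno graph files are empty and geninfo reports each file twice, once as a note and once as a WARNING. Below is a minimal sketch of a capture step that hides exactly this chatter while letting real geninfo errors through; the lcov invocation and the output path are illustrative assumptions, not the exact command this job runs:

  #!/usr/bin/env bash
  # Hedged sketch: capture coverage but filter the benign noise that
  # empty .gcno files generate. An empty .gcno contributes no records
  # to the output either way, so nothing is lost by hiding the pairs.
  src=/home/vagrant/spdk_repo/spdk
  lcov --capture --directory "$src" --output-file coverage.info \
      2> >(grep -vE 'no functions found|GCOV did not produce any data' >&2)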
00:04:55.275 22:12:54 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:55.275 22:12:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.275 22:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.275 22:12:54 -- spdk/autotest.sh@102 -- # rm -f 00:04:55.275 22:12:54 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:55.792 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:55.792 22:12:54 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:55.792 22:12:54 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:55.792 22:12:54 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:55.792 22:12:54 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:55.792 22:12:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:55.792 22:12:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:55.792 22:12:54 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:55.792 22:12:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.792 22:12:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:55.792 22:12:54 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:55.792 22:12:54 -- spdk/autotest.sh@121 -- # grep -v p 00:04:55.792 22:12:54 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:55.792 22:12:54 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:55.792 22:12:54 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:55.792 22:12:54 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:55.792 22:12:54 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:55.792 22:12:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:55.792 No valid GPT data, bailing 00:04:55.792 22:12:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:55.792 22:12:54 -- scripts/common.sh@393 -- # pt= 00:04:55.792 22:12:54 -- scripts/common.sh@394 -- # return 1 00:04:55.792 22:12:54 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:55.792 1+0 records in 00:04:55.792 1+0 records out 00:04:55.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204718 s, 51.2 MB/s 00:04:55.792 22:12:54 -- spdk/autotest.sh@129 -- # sync 00:04:55.792 22:12:54 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.792 22:12:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.792 22:12:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:57.191 22:12:55 -- spdk/autotest.sh@135 -- # uname -s 00:04:57.191 22:12:55 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:57.191 22:12:55 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:57.191 22:12:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.191 22:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.191 22:12:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.191 ************************************ 00:04:57.191 START TEST setup.sh 00:04:57.191 ************************************ 00:04:57.191 22:12:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh * Looking for test storage...
00:04:57.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.191 22:12:55 -- setup/test-setup.sh@10 -- # uname -s 00:04:57.191 22:12:55 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:57.191 22:12:55 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:57.191 22:12:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.191 22:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.191 22:12:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.191 ************************************ 00:04:57.191 START TEST acl 00:04:57.191 ************************************ 00:04:57.191 22:12:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:57.191 * Looking for test storage... 00:04:57.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.191 22:12:56 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:57.191 22:12:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:57.191 22:12:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:57.191 22:12:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:57.191 22:12:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:57.192 22:12:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:57.192 22:12:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:57.192 22:12:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.192 22:12:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:57.192 22:12:56 -- setup/acl.sh@12 -- # devs=() 00:04:57.192 22:12:56 -- setup/acl.sh@12 -- # declare -a devs 00:04:57.192 22:12:56 -- setup/acl.sh@13 -- # drivers=() 00:04:57.192 22:12:56 -- setup/acl.sh@13 -- # declare -A drivers 00:04:57.192 22:12:56 -- setup/acl.sh@51 -- # setup reset 00:04:57.192 22:12:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.192 22:12:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.450 22:12:56 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:57.450 22:12:56 -- setup/acl.sh@16 -- # local dev driver 00:04:57.450 22:12:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.450 22:12:56 -- setup/acl.sh@15 -- # setup output status 00:04:57.450 22:12:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.450 22:12:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:57.709 Hugepages 00:04:57.709 node hugesize free / total 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # continue 00:04:57.709 22:12:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.709 00:04:57.709 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # continue 00:04:57.709 22:12:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:57.709 22:12:56 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:57.709 22:12:56 -- setup/acl.sh@20 -- # continue 00:04:57.709 22:12:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.709 22:12:56 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:57.709 22:12:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:57.709 22:12:56 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:57.709 22:12:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:57.709 22:12:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:57.709 22:12:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.709 22:12:56 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:57.709 22:12:56 -- setup/acl.sh@54 -- # run_test denied denied 00:04:57.709 22:12:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.709 22:12:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.709 22:12:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.709 ************************************ 00:04:57.709 START TEST denied 00:04:57.709 ************************************ 00:04:57.709 22:12:56 -- common/autotest_common.sh@1104 -- # denied 00:04:57.709 22:12:56 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:57.709 22:12:56 -- setup/acl.sh@38 -- # setup output config 00:04:57.709 22:12:56 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:57.709 22:12:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.709 22:12:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.086 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:59.086 22:12:58 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:59.086 22:12:58 -- setup/acl.sh@28 -- # local dev driver 00:04:59.086 22:12:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.086 22:12:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:59.086 22:12:58 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:59.086 22:12:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.086 22:12:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.086 22:12:58 -- setup/acl.sh@41 -- # setup reset 00:04:59.086 22:12:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.086 22:12:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.653 00:04:59.653 real 0m1.768s 00:04:59.653 user 0m0.522s 00:04:59.653 sys 0m1.295s 00:04:59.653 22:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.653 ************************************ 00:04:59.653 END TEST denied 00:04:59.653 ************************************ 00:04:59.653 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.653 22:12:58 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:59.653 22:12:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.653 22:12:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.653 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.653 ************************************ 00:04:59.653 START TEST allowed 00:04:59.653 ************************************ 00:04:59.653 22:12:58 -- common/autotest_common.sh@1104 -- # allowed 00:04:59.653 22:12:58 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.653 22:12:58 -- setup/acl.sh@45 -- # setup output config 00:04:59.653 22:12:58 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:59.653 22:12:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.653 22:12:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.556 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.556 22:13:00 -- setup/acl.sh@47 -- # verify 00:05:01.556 22:13:00 -- setup/acl.sh@28 -- # local dev driver 00:05:01.556 22:13:00 -- setup/acl.sh@48 -- # setup reset 00:05:01.556 22:13:00 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.556 22:13:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.556 00:05:01.556 real 0m2.014s 00:05:01.556 user 0m0.496s 00:05:01.556 sys 0m1.477s 00:05:01.556 22:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.556 ************************************ 00:05:01.556 END TEST allowed 00:05:01.556 ************************************ 00:05:01.556 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.556 00:05:01.556 real 0m4.655s 00:05:01.556 user 0m1.545s 00:05:01.556 sys 0m3.154s 00:05:01.556 22:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.556 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.556 ************************************ 00:05:01.556 END TEST acl 00:05:01.556 ************************************ 00:05:01.816 22:13:00 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:01.816 22:13:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.816 22:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.816 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.816 ************************************ 00:05:01.816 START TEST hugepages 00:05:01.816 ************************************ 00:05:01.816 22:13:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:01.816 * Looking for test storage... 00:05:01.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:01.816 22:13:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:01.816 22:13:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:01.816 22:13:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:01.816 22:13:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:01.816 22:13:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:01.816 22:13:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:01.816 22:13:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:01.816 22:13:00 -- setup/common.sh@18 -- # local node= 00:05:01.816 22:13:00 -- setup/common.sh@19 -- # local var val 00:05:01.816 22:13:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.816 22:13:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.816 22:13:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.817 22:13:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.817 22:13:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.817 22:13:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3093780 kB' 'MemAvailable: 7410972 kB' 'Buffers: 37576 kB' 'Cached: 4405276 kB' 'SwapCached: 0 kB' 'Active: 1201092 kB' 'Inactive: 3368840 kB' 'Active(anon): 136236 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064856 kB' 'Inactive(file): 3367052 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 145760 kB' 'Mapped: 73428 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 297984 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 90836 kB' 'KernelStack: 4688 kB' 'PageTables: 4024 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028400 kB' 'Committed_AS: 633988 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.817 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.817 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.817 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # continue 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.818 22:13:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.818 22:13:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.818 22:13:00 -- setup/common.sh@33 -- # echo 2048 00:05:01.818 22:13:00 -- setup/common.sh@33 -- # return 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:01.818 22:13:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 
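The long walk above is setup/common.sh's get_meminfo: it snapshots /proc/meminfo (or a per-node meminfo file when node= is set) into an array, strips any leading "Node N " prefix with a glob, then compares field names one read at a time until Hugepagesize matches and echoes its value, 2048 here. A compact equivalent for reference; the function name below is an assumption of this sketch, not a helper from the suite:

  # Hedged sketch of the same lookup. Handles both /proc/meminfo lines
  # ("Hugepagesize: 2048 kB") and per-node files, whose lines carry the
  # "Node 0 " prefix that the suite's glob-strip removes.
  get_meminfo_field() {
      local field=$1 file=${2:-/proc/meminfo}
      awk -v f="$field:" '$1 == f { print $2; exit } $3 == f { print $4; exit }' "$file"
  }
  get_meminfo_field Hugepagesize   # -> 2048
  get_meminfo_field HugePages_Total /sys/devices/system/node/node0/meminfo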
00:05:01.818 22:13:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:01.818 22:13:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:01.818 22:13:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:01.818 22:13:00 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:01.818 22:13:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:01.818 22:13:00 -- setup/hugepages.sh@207 -- # get_nodes 00:05:01.818 22:13:00 -- setup/hugepages.sh@27 -- # local node 00:05:01.818 22:13:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.818 22:13:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:01.818 22:13:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.818 22:13:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.818 22:13:00 -- setup/hugepages.sh@208 -- # clear_hp 00:05:01.818 22:13:00 -- setup/hugepages.sh@37 -- # local node hp 00:05:01.818 22:13:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.818 22:13:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.818 22:13:00 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.818 22:13:00 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:01.818 22:13:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:01.818 22:13:00 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:01.818 22:13:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.818 22:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.818 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.818 ************************************ 00:05:01.818 START TEST default_setup 00:05:01.818 ************************************ 00:05:01.818 22:13:00 -- common/autotest_common.sh@1104 -- # default_setup 00:05:01.818 22:13:00 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.818 22:13:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:01.818 22:13:00 -- setup/hugepages.sh@51 -- # shift 00:05:01.818 22:13:00 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:01.818 22:13:00 -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.818 22:13:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.818 22:13:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.818 22:13:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:01.818 22:13:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.818 22:13:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.818 22:13:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.818 22:13:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.818 22:13:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.818 22:13:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.818 22:13:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.818 22:13:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:01.818 22:13:00 -- setup/hugepages.sh@73 -- # return 0 00:05:01.818 22:13:00 -- setup/hugepages.sh@137 -- # setup output 00:05:01.818 22:13:00 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:05:01.818 22:13:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.335 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.905 22:13:01 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:02.905 22:13:01 -- setup/hugepages.sh@89 -- # local node 00:05:02.905 22:13:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.905 22:13:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.905 22:13:01 -- setup/hugepages.sh@92 -- # local surp 00:05:02.905 22:13:01 -- setup/hugepages.sh@93 -- # local resv 00:05:02.905 22:13:01 -- setup/hugepages.sh@94 -- # local anon 00:05:02.905 22:13:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.905 22:13:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.905 22:13:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.905 22:13:01 -- setup/common.sh@18 -- # local node= 00:05:02.905 22:13:01 -- setup/common.sh@19 -- # local var val 00:05:02.905 22:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.905 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.905 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.905 22:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.905 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.905 22:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187684 kB' 'MemAvailable: 9505012 kB' 'Buffers: 37576 kB' 'Cached: 4405284 kB' 'SwapCached: 0 kB' 'Active: 1204908 kB' 'Inactive: 3369064 kB' 'Active(anon): 140144 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149524 kB' 'Mapped: 73776 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298148 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91000 kB' 'KernelStack: 4596 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.905 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.905 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.906 22:13:01 -- setup/common.sh@33 -- # echo 0 00:05:02.906 22:13:01 -- setup/common.sh@33 -- # return 0 00:05:02.906 22:13:01 -- setup/hugepages.sh@97 -- # anon=0 00:05:02.906 22:13:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.906 22:13:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.906 22:13:01 -- setup/common.sh@18 -- # local node= 00:05:02.906 22:13:01 -- setup/common.sh@19 -- # local var val 00:05:02.906 22:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.906 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.906 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.906 22:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.906 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.906 22:13:01 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.906 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187944 kB' 'MemAvailable: 9505272 kB' 'Buffers: 37576 kB' 'Cached: 4405284 kB' 'SwapCached: 0 kB' 'Active: 1205168 kB' 'Inactive: 3369064 kB' 'Active(anon): 140404 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149524 kB' 'Mapped: 73776 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298148 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91000 kB' 'KernelStack: 4596 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.906 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.906 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 
22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.907 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.907 22:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 
00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.908 22:13:01 -- setup/common.sh@33 -- # echo 0 00:05:02.908 22:13:01 -- setup/common.sh@33 -- # return 0 00:05:02.908 22:13:01 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.908 22:13:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.908 22:13:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.908 22:13:01 -- setup/common.sh@18 -- # local node= 00:05:02.908 22:13:01 -- setup/common.sh@19 -- # local var val 00:05:02.908 22:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.908 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.908 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.908 22:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.908 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.908 22:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5188204 kB' 'MemAvailable: 9505532 kB' 'Buffers: 37576 kB' 'Cached: 4405284 kB' 'SwapCached: 0 kB' 'Active: 1204908 kB' 'Inactive: 3369064 kB' 'Active(anon): 140144 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149396 kB' 'Mapped: 73776 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298148 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91000 kB' 'KernelStack: 4596 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue 00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.908 
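The block above is one full pass of the get_meminfo helper from test/setup/common.sh: it snapshots /proc/meminfo into an array with mapfile, strips any leading "Node N" prefix, then read-splits each line on IFS=': ' until the requested field (HugePages_Surp here) matches and its value is echoed. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # A node argument switches to that node's sysfs meminfo,
    # as the trace does later for node 0
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g.: surp=$(get_meminfo HugePages_Surp)   # -> 0 in the run above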
00:05:02.908 22:13:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:02.908 22:13:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:02.908 22:13:01 -- setup/common.sh@18 -- # local node=
00:05:02.908 22:13:01 -- setup/common.sh@19 -- # local var val
00:05:02.908 22:13:01 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.908 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.908 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.908 22:13:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.908 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.908 22:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.908 22:13:01 -- setup/common.sh@31 -- # IFS=': '
00:05:02.908 22:13:01 -- setup/common.sh@31 -- # read -r var val _
00:05:02.908 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5188204 kB' 'MemAvailable: 9505532 kB' 'Buffers: 37576 kB' 'Cached: 4405284 kB' 'SwapCached: 0 kB' 'Active: 1204908 kB' 'Inactive: 3369064 kB' 'Active(anon): 140144 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149396 kB' 'Mapped: 73776 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298148 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91000 kB' 'KernelStack: 4596 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:02.908 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.908 22:13:01 -- setup/common.sh@32 -- # continue
[... xtrace read/compare/continue loop elided: every remaining field of the snapshot above is checked against HugePages_Rsvd and skipped ...]
00:05:02.910 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.910 22:13:01 -- setup/common.sh@33 -- # echo 0
00:05:02.910 22:13:01 -- setup/common.sh@33 -- # return 0
00:05:02.910 22:13:01 -- setup/hugepages.sh@100 -- # resv=0
00:05:02.910 nr_hugepages=1024
00:05:02.910 22:13:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:02.910 22:13:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:02.910 resv_hugepages=0
00:05:02.910 surplus_hugepages=0
00:05:02.910 22:13:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:02.910 anon_hugepages=0
00:05:02.910 22:13:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:02.910 22:13:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.910 22:13:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
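verify_nr_hugepages has now collected surp=0 and resv=0 and asserts that the kernel's pool adds up: the total page count must equal the pages the test requested plus any surplus and reserved pages. A hedged sketch of that arithmetic, using the get_meminfo sketch above (NRHUGE stands in for the requested count; the variable names are ours, not the script's):

surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
# 1024 == 1024 + 0 + 0, so the (( ... )) assertions in the trace pass
(( total == NRHUGE + surp + resv )) || echo "hugepage accounting mismatch" >&2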
00:05:02.910 22:13:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.910 22:13:01 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.910 22:13:01 -- setup/common.sh@18 -- # local node=
00:05:02.910 22:13:01 -- setup/common.sh@19 -- # local var val
00:05:02.910 22:13:01 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.910 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.910 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.910 22:13:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.910 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.910 22:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.910 22:13:01 -- setup/common.sh@31 -- # IFS=': '
00:05:02.910 22:13:01 -- setup/common.sh@31 -- # read -r var val _
00:05:02.910 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5188140 kB' 'MemAvailable: 9505472 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1204792 kB' 'Inactive: 3369068 kB' 'Active(anon): 140028 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367284 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149308 kB' 'Mapped: 73728 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298156 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91008 kB' 'KernelStack: 4628 kB' 'PageTables: 3704 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 636696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:02.910 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.910 22:13:01 -- setup/common.sh@32 -- # continue
[... xtrace read/compare/continue loop elided: every remaining field of the snapshot above is checked against HugePages_Total and skipped ...]
00:05:02.912 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.912 22:13:01 -- setup/common.sh@33 -- # echo 1024
00:05:02.912 22:13:01 -- setup/common.sh@33 -- # return 0
00:05:02.912 22:13:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.912 22:13:01 -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.912 22:13:01 -- setup/hugepages.sh@27 -- # local node
00:05:02.912 22:13:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.912 22:13:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:02.912 22:13:01 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:02.912 22:13:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
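get_nodes then enumerates the NUMA nodes exposed under /sys/devices/system/node; this single-node VM yields no_nodes=1 with nodes_sys[0]=1024. A rough reconstruction of that loop (the per-node HugePages_Total lookup is our assumption about where the 1024 comes from, not a quote of the real helper):

shopt -s extglob
declare -a nodes_sys
get_nodes() {
    local node
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # On this VM: nodes_sys[0]=1024, matching the trace
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # fail if sysfs exposes no NUMA nodes
}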
00:05:02.912 22:13:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.912 22:13:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.912 22:13:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.912 22:13:01 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.912 22:13:01 -- setup/common.sh@18 -- # local node=0
00:05:02.912 22:13:01 -- setup/common.sh@19 -- # local var val
00:05:02.912 22:13:01 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.912 22:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.912 22:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:02.912 22:13:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:02.912 22:13:01 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.912 22:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.912 22:13:01 -- setup/common.sh@31 -- # IFS=': '
00:05:02.912 22:13:01 -- setup/common.sh@31 -- # read -r var val _
00:05:02.912 22:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5188400 kB' 'MemUsed: 7062704 kB' 'Active: 1204792 kB' 'Inactive: 3369068 kB' 'Active(anon): 140028 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064764 kB' 'Inactive(file): 3367284 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 4442864 kB' 'Mapped: 73728 kB' 'AnonPages: 149440 kB' 'Shmem: 2616 kB' 'KernelStack: 4628 kB' 'PageTables: 3704 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207148 kB' 'Slab: 298156 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:02.912 22:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.912 22:13:01 -- setup/common.sh@32 -- # continue
[... xtrace read/compare/continue loop elided: every remaining field of the node0 snapshot above is checked against HugePages_Surp and skipped ...]
00:05:02.913 22:13:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.913 22:13:01 -- setup/common.sh@33 -- # echo 0
00:05:02.913 22:13:01 -- setup/common.sh@33 -- # return 0
00:05:02.913 22:13:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.913 22:13:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.913 22:13:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.913 22:13:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.913 node0=1024 expecting 1024
00:05:02.913 22:13:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:02.913 22:13:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:02.913
00:05:02.913 real 0m1.107s
00:05:02.913 user 0m0.297s
00:05:02.913 sys 0m0.782s
00:05:02.913 22:13:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:02.913 22:13:01 -- common/autotest_common.sh@10 -- # set +x
00:05:02.913 ************************************
00:05:02.913 END TEST default_setup
00:05:02.913 ************************************
00:05:02.913 22:13:01 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:02.913 22:13:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:02.913 22:13:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:02.913 22:13:01 -- common/autotest_common.sh@10 -- # set +x
00:05:02.913 ************************************
00:05:02.913 START TEST per_node_1G_alloc
00:05:02.913 ************************************
00:05:02.913 22:13:01 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:02.913 22:13:01 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:02.913 22:13:01 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:02.913 22:13:01 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:02.913 22:13:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:02.913 22:13:01 -- setup/hugepages.sh@51 -- # shift
00:05:02.913 22:13:01 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:05:02.913 22:13:01 -- setup/hugepages.sh@52 -- # local node_ids
00:05:02.913 22:13:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:02.913 22:13:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:02.913 22:13:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:02.913 22:13:01 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:02.913 22:13:01 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:02.913 22:13:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:02.913 22:13:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:02.913 22:13:01 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:02.913 22:13:01 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:02.913 22:13:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:02.914 22:13:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:02.914 22:13:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:02.914 22:13:01 -- setup/hugepages.sh@73 -- # return 0
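get_test_nr_hugepages has just translated the 1048576 kB (1 GiB) request into 512 default-size 2048 kB pages pinned to node 0, so the NRHUGE=512 HUGENODE=0 setup call that follows asks scripts/setup.sh to reserve them per-node. For illustration, the stock kernel sysfs knob such a reservation ultimately drives looks like this; the path is the standard kernel interface, not something quoted from this log:

# Requires root: 1048576 kB requested / 2048 kB per page = 512 pages on node 0
echo $(( 1048576 / 2048 )) > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # expect 512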
00:05:02.914 22:13:01 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:02.914 22:13:01 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:02.914 22:13:01 -- setup/hugepages.sh@146 -- # setup output
00:05:02.914 22:13:01 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.914 22:13:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:03.431 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.693 22:13:02 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:03.693 22:13:02 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:03.693 22:13:02 -- setup/hugepages.sh@89 -- # local node
00:05:03.693 22:13:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.693 22:13:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.693 22:13:02 -- setup/hugepages.sh@92 -- # local surp
00:05:03.693 22:13:02 -- setup/hugepages.sh@93 -- # local resv
00:05:03.693 22:13:02 -- setup/hugepages.sh@94 -- # local anon
00:05:03.693 22:13:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
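The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test that closes the block above is examining /sys/kernel/mm/transparent_hugepage/enabled: AnonHugePages is only worth sampling when transparent hugepages are not globally disabled, since otherwise the counter is always 0. A sketch of that guard as we read it from the trace (a reconstruction, not the verbatim script):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
else
    anon=0   # THP disabled; anonymous hugepages cannot exist
fi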
00:05:03.693 22:13:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.693 22:13:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.693 22:13:02 -- setup/common.sh@18 -- # local node=
00:05:03.694 22:13:02 -- setup/common.sh@19 -- # local var val
00:05:03.694 22:13:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.694 22:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.694 22:13:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.694 22:13:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.694 22:13:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.694 22:13:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.694 22:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6235160 kB' 'MemAvailable: 10552492 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1204952 kB' 'Inactive: 3369060 kB' 'Active(anon): 140176 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064776 kB' 'Inactive(file): 3367272 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149456 kB' 'Mapped: 73504 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298524 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91376 kB' 'KernelStack: 4492 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 636560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.695 22:13:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.695 22:13:02 -- setup/common.sh@33 -- # echo 0
00:05:03.695 22:13:02 -- setup/common.sh@33 -- # return 0
00:05:03.695 22:13:02 -- setup/hugepages.sh@97 -- # anon=0
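Each get_meminfo call traced here slurps the target file with mapfile and walks it with `IFS=': ' read -r var val _` until the requested key matches, then echoes the value; AnonHugePages is 0 kB on this runner, hence anon=0. A compact sketch of that scan, assuming /proc/meminfo only; the body mirrors the trace, not the exact SPDK source:

# Return the value of one /proc/meminfo field, mapfile-and-scan style.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # "AnonHugePages:  0 kB" splits to var=AnonHugePages val=0 _=kB
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}
anon=$(get_meminfo AnonHugePages)   # 0 here, matching anon=0 above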
00:05:03.695 22:13:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.695 22:13:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.695 22:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.695 22:13:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.695 22:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6235160 kB' 'MemAvailable: 10552492 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1205212 kB' 'Inactive: 3369060 kB' 'Active(anon): 140436 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064776 kB' 'Inactive(file): 3367272 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149716 kB' 'Mapped: 73504 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298524 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91376 kB' 'KernelStack: 4492 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 641432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.696 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.696 22:13:02 -- setup/common.sh@33 -- # echo 0
00:05:03.696 22:13:02 -- setup/common.sh@33 -- # return 0
00:05:03.696 22:13:02 -- setup/hugepages.sh@99 -- # surp=0
00:05:03.696 22:13:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.696 22:13:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.696 22:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.696 22:13:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.696 22:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6235408 kB' 'MemAvailable: 10552740 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1205004 kB' 'Inactive: 3369060 kB' 'Active(anon): 140228 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064776 kB' 'Inactive(file): 3367272 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149320 kB' 'Mapped: 73516 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298308 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91160 kB' 'KernelStack: 4464 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 636368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.697 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.697 22:13:02 -- setup/common.sh@33 -- # echo 0
00:05:03.697 22:13:02 -- setup/common.sh@33 -- # return 0
00:05:03.697 22:13:02 -- setup/hugepages.sh@100 -- # resv=0
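Three scans (anon, surp, resv) have now each re-read /proc/meminfo in full. Purely as an illustration of the counters involved, and not something the traced script does, the same values can be collected in one awk pass; the variable names come straight from the file's keys:

# One pass for every hugepage counter the verifier uses. HugePages_* are bare
# page counts; AnonHugePages is reported in kB, so only its number is kept.
eval "$(awk -F'[: ]+' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ { print $1 "=" $2 }' /proc/meminfo)"
echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp anon=$AnonHugePages"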
00:05:03.697 nr_hugepages=512
00:05:03.697 22:13:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:03.697 22:13:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.697 resv_hugepages=0
00:05:03.697 surplus_hugepages=0
00:05:03.697 22:13:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.697 anon_hugepages=0
00:05:03.697 22:13:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.697 22:13:02 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:03.697 22:13:02 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
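The @107 and @109 guards just traced, together with the @110 comparison expanded below, encode the verifier's invariant: the kernel's HugePages_Total must equal the requested pool (NRHUGE=512) plus any surplus and reserved pages, so 512 == 512 + 0 + 0 on this runner. A sketch of that check, reusing the get_meminfo sketch above; variable names follow the trace, not the exact SPDK code:

nr_hugepages=512 surp=0 resv=0   # the values the trace just derived
if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
    echo "hugepage pool verified: $nr_hugepages pages"
else
    echo "hugepage pool inconsistent" >&2
fi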
00:05:03.697 22:13:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.697 22:13:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.697 22:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.697 22:13:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.697 22:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6235668 kB' 'MemAvailable: 10553000 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1204744 kB' 'Inactive: 3369060 kB' 'Active(anon): 139968 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064776 kB' 'Inactive(file): 3367272 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149192 kB' 'Mapped: 73516 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298308 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91160 kB' 'KernelStack: 4532 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 635524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14388 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.700 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.700 22:13:02 -- setup/common.sh@33 -- # echo 512
00:05:03.700 22:13:02 -- setup/common.sh@33 -- # return 0
00:05:03.700 22:13:02 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:03.700 22:13:02 -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.700 22:13:02 -- setup/hugepages.sh@27 -- # local node
00:05:03.700 22:13:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.700 22:13:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:03.700 22:13:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:03.700 22:13:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.700 22:13:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.700 22:13:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.700 22:13:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.700 22:13:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.700 22:13:02 -- setup/common.sh@18 -- # local node=0
00:05:03.700 22:13:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.700 22:13:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:03.700 22:13:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.700 22:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6235676 kB' 'MemUsed: 6015428 kB' 'Active: 1204752 kB' 'Inactive: 3369060 kB' 'Active(anon): 139976 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064776 kB' 'Inactive(file): 3367272 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 4442864 kB' 'Mapped: 73516 kB' 'AnonPages: 149440 kB' 'Shmem: 2616 kB' 'KernelStack: 4584 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207148 kB' 'Slab: 298308 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup/common.sh@32 -- # continue 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 22:13:02 -- setup/common.sh@32 -- # continue 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 22:13:02 -- setup/common.sh@32 -- # continue 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 22:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 22:13:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 22:13:02 -- setup/common.sh@33 -- # echo 0 00:05:03.700 22:13:02 -- setup/common.sh@33 -- # return 0 00:05:03.701 22:13:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.701 22:13:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.701 22:13:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.701 22:13:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.701 node0=512 expecting 512 00:05:03.701 22:13:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.701 00:05:03.701 real 0m0.652s 00:05:03.701 user 0m0.281s 00:05:03.701 sys 0m0.406s 00:05:03.701 22:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.701 22:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.701 ************************************ 00:05:03.701 END TEST per_node_1G_alloc 00:05:03.701 ************************************ 00:05:03.701 22:13:02 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:03.701 22:13:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.701 22:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.701 22:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.701 ************************************ 00:05:03.701 START TEST even_2G_alloc 00:05:03.701 ************************************ 00:05:03.701 22:13:02 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:03.701 22:13:02 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:03.701 22:13:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.701 22:13:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.701 22:13:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.701 22:13:02 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.701 22:13:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.701 22:13:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.701 22:13:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.701 22:13:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.701 22:13:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.701 22:13:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.701 22:13:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:03.701 22:13:02 -- 
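The sizing arithmetic traced above reduces to a few lines. A minimal sketch, assuming sizes are expressed in kB to match the 'Hugepagesize: 2048 kB' the snapshots below report (names follow the trace; the simplified body is not the verbatim SPDK source):

    # Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node
    # as replayed above. Assumption: size and default_hugepages are kB.
    get_test_nr_hugepages() {
        local size=$1 default_hugepages=2048     # 2048 kB = one 2 MiB page
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
        # Single NUMA node, no per-node override: whole pool on node 0.
        nodes_test=([0]=$nr_hugepages)
    }
    get_test_nr_hugepages 2097152                # 2 GiB requested, in kB
    echo "nr_hugepages=$nr_hugepages"            # -> nr_hugepages=1024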
00:05:03.701 22:13:02 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:03.701 22:13:02 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:03.701 22:13:02 -- setup/hugepages.sh@153 -- # setup output
00:05:03.701 22:13:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.701 22:13:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:03.961 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.529 22:13:03 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:04.529 22:13:03 -- setup/hugepages.sh@89 -- # local node
00:05:04.529 22:13:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.529 22:13:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.529 22:13:03 -- setup/hugepages.sh@92 -- # local surp
00:05:04.529 22:13:03 -- setup/hugepages.sh@93 -- # local resv
00:05:04.529 22:13:03 -- setup/hugepages.sh@94 -- # local anon
00:05:04.529 22:13:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.529 22:13:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.529 22:13:03 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.529 22:13:03 -- setup/common.sh@18 -- # local node=
00:05:04.529 22:13:03 -- setup/common.sh@19 -- # local var val
00:05:04.529 22:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.529 22:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.529 22:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.529 22:13:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.529 22:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.529 22:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.529 22:13:03 -- setup/common.sh@31 -- # IFS=': '
00:05:04.529 22:13:03 -- setup/common.sh@31 -- # read -r var val _
00:05:04.529 22:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187156 kB' 'MemAvailable: 9504488 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1204960 kB' 'Inactive: 3369048 kB' 'Active(anon): 140172 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064788 kB' 'Inactive(file): 3367260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149072 kB' 'Mapped: 73528 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298280 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91132 kB' 'KernelStack: 4528 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 629872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
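The field-by-field scan that follows replays this printed snapshot one entry at a time. What setup/common.sh's get_meminfo is doing amounts to the sketch below, simplified from the xtrace (the real helper also accepts a node argument, empty here, and would then read /sys/devices/system/node/nodeN/meminfo instead; the xtrace prints the IFS=': ' prefix assignment and the read as separate entries):

    shopt -s extglob                      # needed for the +([0-9]) pattern

    get_meminfo() {                       # simplified sketch, not verbatim SPDK
        local get=$1 var val _ line mem
        local mem_f=/proc/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip "Node N " prefixes; a no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo AnonHugePages             # prints 0 against the snapshot above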
00:05:04.530 22:13:03 [xtrace elided: setup/common.sh@31-32 reads each snapshot field (MemTotal … HardwareCorrupted), continuing past each until AnonHugePages matches below]
00:05:04.530 22:13:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.530 22:13:03 -- setup/common.sh@33 -- # echo 0
00:05:04.530 22:13:03 -- setup/common.sh@33 -- # return 0
00:05:04.530 22:13:03 -- setup/hugepages.sh@97 -- # anon=0
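A note on the odd-looking right-hand sides such as \A\n\o\n\H\u\g\e\P\a\g\e\s: the comparison in the script is an ordinary quoted string match; bash's set -x re-prints a quoted == operand with every character backslash-escaped to show it is matched literally rather than as a glob. A minimal reproduction (hypothetical snippet, not from the SPDK tree):

    set -x
    get=AnonHugePages
    [[ $get == "AnonHugePages" ]] && echo matched
    # the xtrace line for the test reads:
    #   [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]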
00:05:04.530 22:13:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.530 22:13:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.530 22:13:03 -- setup/common.sh@18 -- # local node=
00:05:04.530 22:13:03 -- setup/common.sh@19 -- # local var val
00:05:04.530 22:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.530 22:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.530 22:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.530 22:13:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.530 22:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.530 22:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.530 22:13:03 -- setup/common.sh@31 -- # IFS=': '
00:05:04.530 22:13:03 -- setup/common.sh@31 -- # read -r var val _
00:05:04.531 22:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187156 kB' 'MemAvailable: 9504488 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1205220 kB' 'Inactive: 3369048 kB' 'Active(anon): 140432 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064788 kB' 'Inactive(file): 3367260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149332 kB' 'Mapped: 73528 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298280 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91132 kB' 'KernelStack: 4528 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 635100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.531 22:13:03 [xtrace elided: setup/common.sh@31-32 reads each snapshot field (MemTotal … HugePages_Rsvd), continuing past each until HugePages_Surp matches below]
00:05:04.531 22:13:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.531 22:13:03 -- setup/common.sh@33 -- # echo 0
00:05:04.532 22:13:03 -- setup/common.sh@33 -- # return 0
00:05:04.532 22:13:03 -- setup/hugepages.sh@99 -- # surp=0
00:05:04.532 22:13:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:04.532 22:13:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:04.532 22:13:03 -- setup/common.sh@18 -- # local node=
00:05:04.532 22:13:03 -- setup/common.sh@19 -- # local var val
00:05:04.532 22:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.532 22:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.532 22:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.532 22:13:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.532 22:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.532 22:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.532 22:13:03 -- setup/common.sh@31 -- # IFS=': '
00:05:04.532 22:13:03 -- setup/common.sh@31 -- # read -r var val _
00:05:04.532 22:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187416 kB' 'MemAvailable: 9504748 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1205220 kB' 'Inactive: 3369048 kB' 'Active(anon): 140432 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064788 kB' 'Inactive(file): 3367260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149204 kB' 'Mapped: 73528 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298280 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91132 kB' 'KernelStack: 4528 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 635100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.532 22:13:03 [xtrace elided: setup/common.sh@31-32 reads each snapshot field (MemTotal … HugePages_Free), continuing past each until HugePages_Rsvd matches below]
00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.533 22:13:03 -- setup/common.sh@33 -- # echo 0
00:05:04.533 22:13:03 -- setup/common.sh@33 -- # return 0
00:05:04.533 22:13:03 -- setup/hugepages.sh@100 -- # resv=0
00:05:04.533 22:13:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:04.533 nr_hugepages=1024
00:05:04.533 22:13:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:04.533 resv_hugepages=0
00:05:04.533 22:13:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:04.533 surplus_hugepages=0
00:05:04.533 22:13:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:04.533 anon_hugepages=0
00:05:04.533 22:13:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.533 22:13:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
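With anon, surp, and resv collected, the check at setup/hugepages.sh@107-110 is plain accounting: the pool the test configured must equal what the kernel reports once surplus and reserved pages are accounted for, and the script now re-reads HugePages_Total to confirm it. A sketch of the invariant, reusing the hypothetical get_meminfo sketch from earlier (not the verbatim SPDK source):

    # Invariant asserted by verify_nr_hugepages, per the trace above.
    nr_hugepages=1024                         # what the test configured
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
    total=$(get_meminfo HugePages_Total)      # 1024 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo 'hugepage accounting mismatch' >&2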
00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187936 kB' 'MemAvailable: 9505268 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1205220 kB' 'Inactive: 3369048 kB' 'Active(anon): 140432 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064788 kB' 'Inactive(file): 3367260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 149724 kB' 'Mapped: 73528 kB' 'Shmem: 2616 kB' 'KReclaimable: 207148 kB' 'Slab: 298280 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91132 kB' 'KernelStack: 4596 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 639940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.533 22:13:03 -- setup/common.sh@32 -- # continue 00:05:04.533 22:13:03 -- setup/common.sh@31 -- # IFS=': ' 
[... xtrace trimmed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle for every remaining meminfo key, Active(anon) through CmaFree, one 'continue' per non-matching key ...]
00:05:04.534 22:13:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.534 22:13:03 -- setup/common.sh@33 -- # echo 1024
00:05:04.534 22:13:03 -- setup/common.sh@33 -- # return 0
00:05:04.534 22:13:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
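Annotation: the storm of '# continue' entries above is bash xtrace from setup/common.sh's get_meminfo, which walks every key of /proc/meminfo (or of a per-node meminfo file) until it reaches the requested one. A minimal sketch of the loop being traced, reconstructed from the xtrace alone, so treat names and details as approximate rather than the verbatim SPDK source:

    #!/usr/bin/env bash
    # Reconstructed sketch of get_meminfo as the trace above shows it.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node statistics live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # every miss is one 'continue' in the trace
            echo "$val"                         # e.g. 1024 for HugePages_Total here
            return 0
        done < <(printf '%s\n' "${mem[@]}")    # the printf dumps visible in the log
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on the VM in this run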
00:05:04.534 22:13:03 -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.534 22:13:03 -- setup/hugepages.sh@27 -- # local node
00:05:04.534 22:13:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.534 22:13:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:04.534 22:13:03 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.534 22:13:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.534 22:13:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.534 22:13:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.534 22:13:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.534 22:13:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.534 22:13:03 -- setup/common.sh@18 -- # local node=0
00:05:04.534 22:13:03 -- setup/common.sh@19 -- # local var val
00:05:04.534 22:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.534 22:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.534 22:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.534 22:13:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.534 22:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.534 22:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.534 22:13:03 -- setup/common.sh@31 -- # IFS=': '
00:05:04.534 22:13:03 -- setup/common.sh@31 -- # read -r var val _
00:05:04.534 22:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5187936 kB' 'MemUsed: 7063168 kB' 'Active: 1204708 kB' 'Inactive: 3369048 kB' 'Active(anon): 139920 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064788 kB' 'Inactive(file): 3367260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 4442864 kB' 'Mapped: 73528 kB' 'AnonPages: 149544 kB' 'Shmem: 2616 kB' 'KernelStack: 4632 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207148 kB' 'Slab: 298280 kB' 'SReclaimable: 207148 kB' 'SUnreclaim: 91132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace trimmed: the same key-by-key scan runs against node0's meminfo, one 'continue' per key from MemTotal through HugePages_Free, until HugePages_Surp matches ...]
00:05:04.535 22:13:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.535 22:13:03 -- setup/common.sh@33 -- # echo 0
00:05:04.535 22:13:03 -- setup/common.sh@33 -- # return 0
00:05:04.535 22:13:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.535 22:13:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.535 22:13:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.535 22:13:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.535 22:13:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:04.535 node0=1024 expecting 1024
00:05:04.535 22:13:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:04.535 real 0m0.927s
00:05:04.535 user 0m0.255s
00:05:04.535 sys 0m0.707s
00:05:04.535 22:13:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.535 22:13:03 -- common/autotest_common.sh@10 -- # set +x
00:05:04.535 ************************************
00:05:04.535 END TEST even_2G_alloc
00:05:04.535 ************************************
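Annotation: even_2G_alloc passes because the kernel reports exactly the 1024 requested 2048 kB pages, with zero surplus and zero reserved pages, and node0, the only NUMA node on this VM, holds all of them. Condensed into a hedged sketch built on the get_meminfo sketch above (hugepages.sh's real bookkeeping with nodes_test/nodes_sys is more elaborate):

    # Rough shape of the check even_2G_alloc just performed (this run's values in comments).
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1024
    (( total == nr_hugepages + surp + resv )) || exit 1

    # Each NUMA node must hold its expected share; a single node gets everything.
    for path in /sys/devices/system/node/node[0-9]*; do
        node=${path##*node}
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting $nr_hugepages"    # "node0=1024 expecting 1024"
        [[ $got == "$nr_hugepages" ]] || exit 1
    done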
00:05:04.795 22:13:03 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:04.795 22:13:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:04.795 22:13:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:04.795 22:13:03 -- common/autotest_common.sh@10 -- # set +x
00:05:04.795 ************************************
00:05:04.795 START TEST odd_alloc
00:05:04.795 ************************************
00:05:04.795 22:13:03 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:04.795 22:13:03 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:04.795 22:13:03 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:04.795 22:13:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:04.795 22:13:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:04.795 22:13:03 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:04.795 22:13:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:04.795 22:13:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:04.795 22:13:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:04.795 22:13:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:04.795 22:13:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:04.795 22:13:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:04.795 22:13:03 -- setup/hugepages.sh@83 -- # : 0
00:05:04.795 22:13:03 -- setup/hugepages.sh@84 -- # : 0
00:05:04.795 22:13:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.795 22:13:03 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:04.795 22:13:03 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:04.795 22:13:03 -- setup/hugepages.sh@160 -- # setup output
00:05:04.795 22:13:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.795 22:13:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:05.053 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.622 22:13:04 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:05.622 22:13:04 -- setup/hugepages.sh@89 -- # local node
00:05:05.622 22:13:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.622 22:13:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.622 22:13:04 -- setup/hugepages.sh@92 -- # local surp
00:05:05.622 22:13:04 -- setup/hugepages.sh@93 -- # local resv
00:05:05.622 22:13:04 -- setup/hugepages.sh@94 -- # local anon
00:05:05.622 22:13:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.622 22:13:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.622 22:13:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.622 22:13:04 -- setup/common.sh@18 -- # local node=
00:05:05.622 22:13:04 -- setup/common.sh@19 -- # local var val
00:05:05.622 22:13:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.622 22:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.622 22:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.622 22:13:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.622 22:13:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.622 22:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.622 22:13:04 -- setup/common.sh@31 -- # IFS=': '
00:05:05.622 22:13:04 -- setup/common.sh@31 -- # read -r var val _
00:05:05.622 22:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5198852 kB' 'MemAvailable: 9516200 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1191040 kB' 'Inactive: 3369028 kB' 'Active(anon): 126232 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064808 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 134848 kB' 'Mapped: 72852 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 298260 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 91096 kB' 'KernelStack: 4512 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 603404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... xtrace trimmed: the key-by-key scan repeats against /proc/meminfo, MemTotal through HardwareCorrupted, until AnonHugePages matches ...]
00:05:05.623 22:13:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.623 22:13:04 -- setup/common.sh@33 -- # echo 0
00:05:05.623 22:13:04 -- setup/common.sh@33 -- # return 0
00:05:05.623 22:13:04 -- setup/hugepages.sh@97 -- # anon=0
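Annotation: verify_nr_hugepages first rules out transparent hugepages inflating the count. The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above inspects the THP policy string; since THP is not forced off, the script reads AnonHugePages, which is 0 kB in this run. A hedged sketch of that step (the sysfs path is the standard kernel location; the surrounding logic is inferred from the trace):

    # Sketch of the anonymous-hugepage probe traced above.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use, so count THP-backed anonymous memory (in kB).
        anon=$(get_meminfo AnonHugePages)                  # 0 in this run
    fi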
00:05:05.623 22:13:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.623 22:13:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.623 22:13:04 -- setup/common.sh@18 -- # local node=
00:05:05.623 22:13:04 -- setup/common.sh@19 -- # local var val
00:05:05.623 22:13:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.623 22:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.623 22:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.623 22:13:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.623 22:13:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.623 22:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.623 22:13:04 -- setup/common.sh@31 -- # IFS=': '
00:05:05.623 22:13:04 -- setup/common.sh@31 -- # read -r var val _
00:05:05.623 22:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5199328 kB' 'MemAvailable: 9516676 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1190888 kB' 'Inactive: 3369028 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064808 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 134816 kB' 'Mapped: 72852 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 298012 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90848 kB' 'KernelStack: 4480 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 608284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... xtrace trimmed: the key-by-key scan repeats, MemTotal through HugePages_Rsvd, one 'continue' per key, until HugePages_Surp matches ...]
00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.624 22:13:04 -- setup/common.sh@33 -- # echo 0
00:05:05.624 22:13:04 -- setup/common.sh@33 -- # return 0
00:05:05.624 22:13:04 -- setup/hugepages.sh@99 -- # surp=0
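Annotation: odd_alloc deliberately requests an odd page count: HUGEMEM=2049 became size=2098176 kB above, which works out to 1025 pages of 2048 kB, a value chosen to shake out even-only or off-by-one allocation behavior. With anon and surp known, the remainder of the trace fetches HugePages_Rsvd and HugePages_Total and closes the same accounting identity checked for even_2G_alloc. Condensed as a sketch, with values from the meminfo dumps above (the rounding of 2098176/2048 up to 1025 is inferred from the trace, not from the script source):

    # How verify_nr_hugepages closes the books for odd_alloc.
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    nr=$(get_meminfo HugePages_Total)     # 1025, matching the dumps above
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( nr == 1025 + surp + resv )) || exit 1    # the odd target must be met exactly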
setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.624 22:13:04 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.624 22:13:04 -- setup/common.sh@32 -- # continue
00:05:05.624 22:13:04 -- setup/common.sh@31 -- # IFS=': '
00:05:05.624 22:13:04 -- setup/common.sh@31 -- # read -r var val _
[identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue / IFS=': ' / read -r var val _ rounds for the remaining /proc/meminfo fields elided]
00:05:05.625 22:13:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.625 22:13:04 -- setup/common.sh@33 -- # echo 0
00:05:05.625 22:13:04 -- setup/common.sh@33 -- # return 0
00:05:05.625 22:13:04 -- setup/hugepages.sh@100 -- # resv=0
00:05:05.625 22:13:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:05.625 nr_hugepages=1025
00:05:05.625 resv_hugepages=0
00:05:05.625 22:13:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:05.625 surplus_hugepages=0
00:05:05.625 22:13:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:05.625 anon_hugepages=0
00:05:05.625 22:13:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:05.625 22:13:04 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:05.625 22:13:04 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
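The four echoes above are the verdict of the odd_alloc verification pass: the pool configured earlier (nr_hugepages=1025) is fully accounted for, with no reserved, surplus, or anonymous hugepages outstanding. A minimal standalone sketch of the same shape of accounting check, assuming only bash, awk, and the standard /proc/meminfo field names (read_meminfo here is a hypothetical helper, not part of the harness):

  #!/usr/bin/env bash
  # Pull one numeric counter out of /proc/meminfo by field name.
  read_meminfo() { awk -v k="$1" -F': *' '$1 == k { print $2 + 0 }' /proc/meminfo; }

  total=$(read_meminfo HugePages_Total)
  rsvd=$(read_meminfo HugePages_Rsvd)
  surp=$(read_meminfo HugePages_Surp)
  nr=$(cat /proc/sys/vm/nr_hugepages)

  # The invariant asserted at hugepages.sh@107 above: the kernel's pool
  # must equal the requested page count plus surplus and reserved pages.
  if (( total == nr + surp + rsvd )); then
      echo "hugepage accounting consistent: total=$total"
  else
      echo "mismatch: total=$total nr=$nr surp=$surp rsvd=$rsvd" >&2
  fi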
00:05:05.625 22:13:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.625 22:13:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.625 22:13:04 -- setup/common.sh@18 -- # local node=
00:05:05.625 22:13:04 -- setup/common.sh@19 -- # local var val
00:05:05.625 22:13:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.625 22:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.625 22:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.625 22:13:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.625 22:13:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.625 22:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.625 22:13:04 -- setup/common.sh@31 -- # IFS=': '
00:05:05.625 22:13:04 -- setup/common.sh@31 -- # read -r var val _
00:05:05.625 22:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5199992 kB' 'MemAvailable: 9517340 kB' 'Buffers: 37576 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1190812 kB' 'Inactive: 3369028 kB' 'Active(anon): 126004 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064808 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 135232 kB' 'Mapped: 72852 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 297796 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90632 kB' 'KernelStack: 4332 kB' 'PageTables: 3312 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 607264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / continue / IFS=': ' / read -r var val _ rounds for the preceding /proc/meminfo fields elided]
00:05:05.627 22:13:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.627 22:13:04 -- setup/common.sh@33 -- # echo 1025
00:05:05.627 22:13:04 -- setup/common.sh@33 -- # return 0
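get_meminfo is the harness's generic meminfo lookup: it snapshots the chosen meminfo file into an array, then walks it field by field under IFS=': ' until the requested field matches, echoing the value and returning. A condensed standalone sketch of the same idiom (the real setup/common.sh reads through the mem array rather than a process substitution, but the scan is the same):

  # Look up one field from /proc/meminfo, or from a node's sysfs meminfo
  # when a node number is given (per-node lines carry a "Node <n> " prefix).
  get_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* *//' "$mem_f")
      return 1
  }

  get_meminfo HugePages_Total    # system-wide, e.g. 1025 here
  get_meminfo HugePages_Surp 0   # node 0 only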
00:05:05.627 22:13:04 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:05.627 22:13:04 -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.627 22:13:04 -- setup/hugepages.sh@27 -- # local node
00:05:05.627 22:13:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.627 22:13:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:05.627 22:13:04 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:05.627 22:13:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.627 22:13:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.627 22:13:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.627 22:13:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.627 22:13:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.627 22:13:04 -- setup/common.sh@18 -- # local node=0
00:05:05.627 22:13:04 -- setup/common.sh@19 -- # local var val
00:05:05.627 22:13:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.627 22:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.627 22:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.627 22:13:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.627 22:13:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.627 22:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.627 22:13:04 -- setup/common.sh@31 -- # IFS=': '
00:05:05.627 22:13:04 -- setup/common.sh@31 -- # read -r var val _
00:05:05.627 22:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5200148 kB' 'MemUsed: 7050956 kB' 'Active: 1190728 kB' 'Inactive: 3369028 kB' 'Active(anon): 125920 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064808 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 4442864 kB' 'Mapped: 72828 kB' 'AnonPages: 135352 kB' 'Shmem: 2616 kB' 'KernelStack: 4368 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207164 kB' 'Slab: 297796 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue / IFS=': ' / read -r var val _ rounds for the remaining node0 meminfo fields elided]
00:05:05.628 22:13:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.628 22:13:04 -- setup/common.sh@33 -- # echo 0
00:05:05.628 22:13:04 -- setup/common.sh@33 -- # return 0
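With only one NUMA node present, get_nodes records node0 and the surplus check above reads that node's own meminfo from sysfs rather than /proc. A sketch of the same enumeration, assuming the standard sysfs layout (the extglob pattern mirrors the one traced at hugepages.sh@29):

  #!/usr/bin/env bash
  shopt -s extglob nullglob
  # Walk every NUMA node directory and read one per-node hugepage counter.
  for node_dir in /sys/devices/system/node/node+([0-9]); do
      node=${node_dir##*node}
      # Per-node meminfo lines look like "Node 0 HugePages_Free: 1025".
      free=$(awk '/HugePages_Free:/ { print $NF }' "$node_dir/meminfo")
      echo "node$node HugePages_Free=$free"
  done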
00:05:05.628 22:13:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.628 22:13:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.628 22:13:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.628 node0=1025 expecting 1025
00:05:05.628 22:13:04 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:05.628 22:13:04 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:05.628
00:05:05.628 real 0m0.888s
00:05:05.628 user 0m0.270s
00:05:05.628 sys 0m0.652s
00:05:05.628 22:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:05.628 22:13:04 -- common/autotest_common.sh@10 -- # set +x
00:05:05.628 ************************************
00:05:05.628 END TEST odd_alloc
00:05:05.628 ************************************
00:05:05.628 22:13:04 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:05.628 22:13:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:05.628 22:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:05.628 22:13:04 -- common/autotest_common.sh@10 -- # set +x
00:05:05.628 ************************************
00:05:05.628 START TEST custom_alloc
00:05:05.628 ************************************
00:05:05.628 22:13:04 -- common/autotest_common.sh@1104 -- # custom_alloc
00:05:05.628 22:13:04 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:05.628 22:13:04 -- setup/hugepages.sh@169 -- # local node
00:05:05.628 22:13:04 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:05.628 22:13:04 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:05.628 22:13:04 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:05.628 22:13:04 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:05.628 22:13:04 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:05.628 22:13:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:05.628 22:13:04 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:05.628 22:13:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.628 22:13:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:05.628 22:13:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.628 22:13:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.628 22:13:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@83 -- # : 0
00:05:05.628 22:13:04 -- setup/hugepages.sh@84 -- # : 0
00:05:05.628 22:13:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:05.628 22:13:04 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:05.628 22:13:04 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:05.628 22:13:04 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:05.628 22:13:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.628 22:13:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:05.628 22:13:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.628 22:13:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.628 22:13:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:05.628 22:13:04 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:05.628 22:13:04 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:05.628 22:13:04 -- setup/hugepages.sh@78 -- # return 0
00:05:05.628 22:13:04 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:05.628 22:13:04 -- setup/hugepages.sh@187 -- # setup output
00:05:05.628 22:13:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.628 22:13:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:05.887 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.148 22:13:05 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:06.148 22:13:05 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:06.148 22:13:05 -- setup/hugepages.sh@89 -- # local node
00:05:06.148 22:13:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.148 22:13:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.148 22:13:05 -- setup/hugepages.sh@92 -- # local surp
00:05:06.148 22:13:05 -- setup/hugepages.sh@93 -- # local resv
00:05:06.148 22:13:05 -- setup/hugepages.sh@94 -- # local anon
00:05:06.148 22:13:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.148 22:13:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.148 22:13:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.148 22:13:05 -- setup/common.sh@18 -- # local node=
00:05:06.148 22:13:05 -- setup/common.sh@19 -- # local var val
00:05:06.148 22:13:05 -- setup/common.sh@20 -- # local mem_f mem
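custom_alloc's get_test_nr_hugepages converted the requested 1048576 kB (1 GiB) pool into 512 pages, which is where the nr_hugepages=512 and the HUGENODE='nodes_hp[0]=512' handed to setup.sh above come from. The arithmetic, as a sketch assuming the 2048 kB default hugepage size the snapshots report:

  size_kb=1048576                                                   # requested pool: 1 GiB in kB
  default_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)   # 2048 on this VM
  nr_hugepages=$(( size_kb / default_kb ))                          # 1048576 / 2048 = 512
  echo "HUGENODE=nodes_hp[0]=$nr_hugepages"                         # all 512 pages on node 0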
00:05:06.148 22:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.148 22:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.148 22:13:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.148 22:13:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.148 22:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.148 22:13:05 -- setup/common.sh@31 -- # IFS=': '
00:05:06.148 22:13:05 -- setup/common.sh@31 -- # read -r var val _
00:05:06.148 22:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6252092 kB' 'MemAvailable: 10569444 kB' 'Buffers: 37576 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190364 kB' 'Inactive: 3369028 kB' 'Active(anon): 125552 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064812 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 135056 kB' 'Mapped: 72996 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 297760 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90596 kB' 'KernelStack: 4260 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 598372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14036 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[identical "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / continue / IFS=': ' / read -r var val _ rounds for the preceding /proc/meminfo fields elided]
00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.149 22:13:05 -- setup/common.sh@33 -- # echo 0
00:05:06.149 22:13:05 -- setup/common.sh@33 -- # return 0
00:05:06.149 22:13:05 -- setup/hugepages.sh@97 -- # anon=0
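Note that the snapshot now reports HugePages_Total: 512 and Hugetlb: 1048576 kB, confirming setup.sh resized the pool as requested. The anon lookup just traced is gated by the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96; a sketch of that gate, assuming the string comes from the usual transparent-hugepage sysfs knob:

  # Only count AnonHugePages when transparent hugepages are not disabled.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *'[never]'* ]]; then
      anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"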
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.149 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.149 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.150 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.150 22:13:05 -- setup/common.sh@33 -- # echo 0 00:05:06.150 22:13:05 -- setup/common.sh@33 -- # return 0 00:05:06.150 22:13:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.150 22:13:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.150 22:13:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.150 22:13:05 -- setup/common.sh@18 -- # local node= 00:05:06.150 22:13:05 -- setup/common.sh@19 -- # local var val 00:05:06.150 22:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.150 22:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.150 22:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
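The scan traced above is setup/common.sh's get_meminfo pattern: the target file is snapshotted with mapfile, each line is split with IFS=': ' into key and value ("read -r var val _" drops the trailing "kB" into $_), and every key is tested against a fully backslash-escaped pattern such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p, because in [[ == ]] the right-hand side is a glob and escaping each character forces a literal key match. The long single-line meminfo dumps in the trace are simply xtrace expanding printf '%s\n' "${mem[@]}". A minimal re-implementation sketch, assuming nothing beyond what the trace shows (get_meminfo_sketch is a name introduced here, not the SPDK helper itself):

    # Minimal sketch of the meminfo lookup traced above (illustrative,
    # not the verbatim setup/common.sh function).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With an empty node id the candidate path is .../node/node/meminfo,
        # which never exists, so the system-wide /proc/meminfo is kept --
        # the same [[ -e ... ]] fallback visible in the trace above.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it so
        # the same key names match for both file flavors.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            # $val is the first value field; a trailing "kB" lands in $_.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

On the machine above, get_meminfo_sketch HugePages_Surp would print 0, matching the surp=0 just computed.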
00:05:06.150 22:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.150 22:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.150 22:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.150 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6252560 kB' 'MemAvailable: 10569912 kB' 'Buffers: 37576 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190440 kB' 'Inactive: 3369028 kB' 'Active(anon): 125628 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064812 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 134668 kB' 'Mapped: 72780 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 297756 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90592 kB' 'KernelStack: 4268 kB' 'PageTables: 3112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 598372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14052 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 
22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.151 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.151 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.152 22:13:05 -- setup/common.sh@33 -- # echo 0 00:05:06.152 22:13:05 -- setup/common.sh@33 -- # return 0 00:05:06.152 22:13:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.152 nr_hugepages=512 00:05:06.152 22:13:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:06.152 resv_hugepages=0 00:05:06.152 22:13:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.152 surplus_hugepages=0 00:05:06.152 22:13:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.152 anon_hugepages=0 00:05:06.152 22:13:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.152 22:13:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:06.152 22:13:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:06.152 22:13:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.152 22:13:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.152 22:13:05 -- setup/common.sh@18 -- # local node= 00:05:06.152 22:13:05 -- setup/common.sh@19 -- # local var val 00:05:06.152 22:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.152 22:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.152 22:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.152 22:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.152 22:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.152 22:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6252568 kB' 'MemAvailable: 10569920 kB' 'Buffers: 37576 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190316 kB' 'Inactive: 3369028 kB' 'Active(anon): 125504 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064812 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 134984 kB' 'Mapped: 72780 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 297764 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90600 kB' 'KernelStack: 4336 kB' 'PageTables: 3120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 602284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 
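The check at hugepages.sh@107 above, repeated at @110 once the HugePages_Total scan completes, asserts the accounting invariant for this test: the kernel's HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved pages just read back. With this run's values it is plain arithmetic (a sketch; the lowercase names are labels introduced here):

    # Invariant asserted at hugepages.sh@107/@110 above, using the values
    # from this run: 512 pages requested, 0 surplus, 0 reserved.
    nr_hugepages=512 surp=0 resv=0
    hugepages_total=512   # value the HugePages_Total scan below echoes
    (( hugepages_total == nr_hugepages + surp + resv )) &&
        echo "hugepage accounting consistent"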
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.152 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.152 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.153 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.153 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.153 22:13:05 -- setup/common.sh@33 -- # echo 512 00:05:06.153 22:13:05 -- setup/common.sh@33 -- # return 0 00:05:06.153 22:13:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:06.153 22:13:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.153 22:13:05 -- setup/hugepages.sh@27 -- # local node 00:05:06.153 22:13:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.153 22:13:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.153 22:13:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.153 22:13:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.153 22:13:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.153 22:13:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.153 22:13:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.153 22:13:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.153 22:13:05 -- setup/common.sh@18 -- # local node=0 00:05:06.153 22:13:05 -- setup/common.sh@19 -- # local var val 00:05:06.153 22:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.153 22:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.153 22:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.153 22:13:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.153 22:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.153 22:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6252244 kB' 'MemUsed: 5998860 kB' 'Active: 1190576 kB' 'Inactive: 3369028 kB' 'Active(anon): 125764 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064812 kB' 
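get_nodes at hugepages.sh@112 above enumerates NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) (one match here, hence no_nodes=1), and the per-node get_meminfo HugePages_Surp 0 call that follows switches mem_f to node0's own meminfo file. A sketch of that enumeration, reusing get_meminfo_sketch from the earlier sketch; populating nodes_sys from per-node HugePages_Total is an assumption, chosen only because it is consistent with the 512 recorded at @30:

    # Sketch of the node enumeration at hugepages.sh@29-@32 above.
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips through the last "node", leaving the
        # numeric id, which doubles as the array index.
        id=${node##*node}
        nodes_sys[id]=$(get_meminfo_sketch HugePages_Total "$id")   # assumed source of the 512
    done
    echo "no_nodes=${#nodes_sys[@]}"   # no_nodes=1 on this machine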
'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4442868 kB' 'Mapped: 72780 kB' 'AnonPages: 135116 kB' 'Shmem: 2616 kB' 'KernelStack: 4336 kB' 'PageTables: 3120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207164 kB' 'Slab: 297764 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # continue 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.413 22:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.413 22:13:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.413 22:13:05 -- setup/common.sh@33 -- # echo 0 00:05:06.413 22:13:05 -- setup/common.sh@33 -- # return 0 00:05:06.413 22:13:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.413 22:13:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.413 22:13:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.413 22:13:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.413 node0=512 expecting 512 00:05:06.413 22:13:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.413 22:13:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.413 00:05:06.413 real 0m0.659s 00:05:06.413 user 0m0.243s 
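The final comparison at hugepages.sh@130 above, [[ 512 == \5\1\2 ]], uses the same trick as the \H\u\g\e\P\a\g\e\s... patterns throughout the scan loops: in [[ == ]] the right-hand side is a glob pattern, so backslash-escaping (or quoting) every character forces a byte-for-byte literal match. Two equivalent forms:

    # The RHS of [[ == ]] is a glob; escaping or quoting makes it literal.
    [[ 512 == \5\1\2 ]] && echo "literal match (escaped)"
    [[ 512 == "512" ]]  && echo "literal match (quoted)"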
00:05:06.413 sys 0m0.447s 00:05:06.413 22:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.413 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:06.413 ************************************ 00:05:06.413 END TEST custom_alloc 00:05:06.413 ************************************ 00:05:06.413 22:13:05 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:06.413 22:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.413 22:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.413 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:06.413 ************************************ 00:05:06.413 START TEST no_shrink_alloc 00:05:06.413 ************************************ 00:05:06.414 22:13:05 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:06.414 22:13:05 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:06.414 22:13:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:06.414 22:13:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:06.414 22:13:05 -- setup/hugepages.sh@51 -- # shift 00:05:06.414 22:13:05 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:06.414 22:13:05 -- setup/hugepages.sh@52 -- # local node_ids 00:05:06.414 22:13:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.414 22:13:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:06.414 22:13:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:06.414 22:13:05 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:06.414 22:13:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.414 22:13:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.414 22:13:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:06.414 22:13:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.414 22:13:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.414 22:13:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:06.414 22:13:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:06.414 22:13:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:06.414 22:13:05 -- setup/hugepages.sh@73 -- # return 0 00:05:06.414 22:13:05 -- setup/hugepages.sh@198 -- # setup output 00:05:06.414 22:13:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.414 22:13:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:06.672 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.243 22:13:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:07.243 22:13:06 -- setup/hugepages.sh@89 -- # local node 00:05:07.243 22:13:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.243 22:13:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.243 22:13:06 -- setup/hugepages.sh@92 -- # local surp 00:05:07.243 22:13:06 -- setup/hugepages.sh@93 -- # local resv 00:05:07.243 22:13:06 -- setup/hugepages.sh@94 -- # local anon 00:05:07.243 22:13:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.243 22:13:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.243 22:13:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.243 22:13:06 -- setup/common.sh@18 -- # local node= 00:05:07.243 22:13:06 -- setup/common.sh@19 -- # local var val 00:05:07.243 22:13:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.243 22:13:06 -- 
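The no_shrink_alloc test starting above sizes its allocation with get_test_nr_hugepages 2097152 0: 2097152 kB (2 GiB) on node 0, which at the 2048 kB Hugepagesize reported in the meminfo dumps works out to the nr_hugepages=1024 set at hugepages.sh@57. As standalone arithmetic (a sketch; the variable names are labels introduced here):

    # Size-to-page-count conversion behind get_test_nr_hugepages above.
    size_kb=2097152        # requested: 2 GiB expressed in kB
    hugepagesize_kb=2048   # Hugepagesize from the meminfo dumps
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"   # nr_hugepages=1024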
setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.243 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.243 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.243 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.243 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.243 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.243 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.243 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5204416 kB' 'MemAvailable: 9521772 kB' 'Buffers: 37584 kB' 'Cached: 4405288 kB' 'SwapCached: 0 kB' 'Active: 1190172 kB' 'Inactive: 3369024 kB' 'Active(anon): 125352 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367236 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 135348 kB' 'Mapped: 73028 kB' 'Shmem: 2616 kB' 'KReclaimable: 207164 kB' 'Slab: 297784 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90620 kB' 'KernelStack: 4284 kB' 'PageTables: 3176 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 599140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14020 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the IFS=': ' read loop at setup/common.sh@31-32 compares every meminfo key above against AnonHugePages and hits `continue` for each non-matching key]
00:05:07.244 22:13:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.244 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.244 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.244 22:13:06 -- setup/hugepages.sh@97 -- # anon=0
00:05:07.244 22:13:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.244 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.244 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.244 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.244 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.244 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.244 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.244 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.244 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.244 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.244 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.244 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.244 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' [full /proc/meminfo snapshot, identical to the previous one except 'Active: 1190432 kB' 'Active(anon): 125612 kB' 'AnonPages: 135608 kB' 'Committed_AS: 592544 kB']
[xtrace elided: the same per-key loop, this time scanning for HugePages_Surp]
00:05:07.246 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.246 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.246 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.246 22:13:06 -- setup/hugepages.sh@99 -- # surp=0
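The get_meminfo helper whose xtrace fills this section can be reconstructed from the @17-@33 lines above. The sketch below is a minimal re-creation based only on what the trace shows (the real setup/common.sh may differ in details): it selects /proc/meminfo, or a per-node sysfs file when a node id is passed, strips the "Node N " prefix, and scans key/value pairs until the requested key matches.

    #!/usr/bin/env bash
    # Minimal sketch of get_meminfo, reconstructed from the xtrace; not the verbatim SPDK source.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, read the per-node statistics from sysfs instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node 0 " prefix
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # value only, e.g. 1024 or 135348
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo AnonHugePages it returns 0 in this run; get_meminfo HugePages_Surp 0, used further down, reads node0's meminfo instead.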
00:05:07.246 22:13:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:07.246 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:07.246 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.246 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.246 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.246 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.246 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.246 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.246 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.246 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.246 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.246 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.246 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' [full /proc/meminfo snapshot, identical to the first except 'MemFree: 5204676 kB' 'MemAvailable: 9522032 kB' 'Active: 1190692 kB' 'Active(anon): 125872 kB' 'AnonPages: 135480 kB' 'Committed_AS: 597328 kB']
[xtrace elided: per-key scan for HugePages_Rsvd]
00:05:07.247 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.247 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.247 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.247 22:13:06 -- setup/hugepages.sh@100 -- # resv=0
00:05:07.247 22:13:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:07.248 nr_hugepages=1024
00:05:07.248 22:13:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:07.248 resv_hugepages=0
00:05:07.248 22:13:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:07.248 surplus_hugepages=0
00:05:07.248 22:13:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:07.248 anon_hugepages=0
00:05:07.248 22:13:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:07.248 22:13:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
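The @107 and @109 guards above are the heart of verify_nr_hugepages: the pool is consistent when the kernel-reported total covers the requested page count plus any surplus and reserved pages. A hedged sketch of that accounting, reusing the get_meminfo sketch from earlier (variable names follow the trace; the exact function body is an assumption):

    # Assumed shape of the checks traced at setup/hugepages.sh@97-109.
    nr_hugepages=1024                        # pages this test requested
    anon=$(get_meminfo AnonHugePages)        # 0 in this run
    surp=$(get_meminfo HugePages_Surp)       # 0
    resv=$(get_meminfo HugePages_Rsvd)       # 0
    total=$(get_meminfo HugePages_Total)     # 1024
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Consistent pool: total == requested + surplus + reserved (all zero here).
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))

With surp and resv both zero, the two tests collapse to the same comparison, which is why the trace passes straight through both.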
00:05:07.248 22:13:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:07.248 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:07.248 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.248 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.248 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.248 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.248 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.248 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.248 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.248 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.248 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.248 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.248 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' [full /proc/meminfo snapshot, identical to the first except 'MemFree: 5204660 kB' 'MemAvailable: 9522016 kB' 'Active: 1190444 kB' 'Active(anon): 125624 kB' 'AnonPages: 134988 kB' 'Mapped: 73076 kB' 'Slab: 297800 kB' 'SUnreclaim: 90636 kB' 'KernelStack: 4320 kB' 'PageTables: 3128 kB' 'Committed_AS: 596464 kB' 'VmallocUsed: 14052 kB']
[xtrace elided: per-key scan for HugePages_Total]
00:05:07.249 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:07.249 22:13:06 -- setup/common.sh@33 -- # echo 1024
00:05:07.249 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.249 22:13:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:07.249 22:13:06 -- setup/hugepages.sh@112 -- # get_nodes
00:05:07.249 22:13:06 -- setup/hugepages.sh@27 -- # local node
00:05:07.249 22:13:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.249 22:13:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:07.249 22:13:06 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:07.249 22:13:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.249 22:13:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:07.249 22:13:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:07.249 22:13:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:07.249 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.249 22:13:06 -- setup/common.sh@18 -- # local node=0
00:05:07.249 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.249 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.250 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.250 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:07.250 22:13:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:07.250 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.250 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.250 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.250 22:13:06 -- setup/common.sh@31 -- # read -r var val _
kB' 'Inactive(file): 3367236 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 4442872 kB' 'Mapped: 73076 kB' 'AnonPages: 135508 kB' 'Shmem: 2616 kB' 'KernelStack: 4320 kB' 'PageTables: 3128 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207164 kB' 'Slab: 297800 kB' 'SReclaimable: 207164 kB' 'SUnreclaim: 90636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: each field from MemTotal through HugePages_Free is read with IFS=': ' read -r var val _, compared to HugePages_Surp, and skipped via continue]
00:05:07.250 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.251 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.251 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.251 22:13:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:07.251 22:13:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:07.251 22:13:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:07.251 22:13:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:07.251 22:13:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:07.251 node0=1024 expecting 1024
00:05:07.251 22:13:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:07.251 22:13:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
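The block above is one complete get_meminfo call: setup/common.sh snapshots a node's meminfo file into an array, strips the "Node <id>" prefixes, and walks it field by field until the requested key (here HugePages_Surp on node0) matches, echoing the value and returning. A condensed sketch of that parsing pattern, as a simplification of the traced behavior rather than the verbatim SPDK source:

    shopt -s extglob

    # Condensed sketch of the get_meminfo pattern traced above
    # (assumption: simplified; not the verbatim setup/common.sh source).
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # A node id switches the source to that node's meminfo file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # node files prefix each line with "Node <id> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; } # e.g. HugePages_Surp -> 0
        done
        return 1
    }

Against the node0 snapshot above, get_meminfo HugePages_Surp 0 prints 0, the same value the trace echoes at setup/common.sh@33.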
00:05:07.251 22:13:06 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:07.251 22:13:06 -- setup/hugepages.sh@202 -- # setup output
00:05:07.251 22:13:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.251 22:13:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:07.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:07.512 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:07.512 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:07.512 22:13:06 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:07.512 22:13:06 -- setup/hugepages.sh@89 -- # local node
00:05:07.512 22:13:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.512 22:13:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.512 22:13:06 -- setup/hugepages.sh@92 -- # local surp
00:05:07.512 22:13:06 -- setup/hugepages.sh@93 -- # local resv
00:05:07.512 22:13:06 -- setup/hugepages.sh@94 -- # local anon
00:05:07.512 22:13:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:07.512 22:13:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.512 22:13:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.512 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.512 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.512 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.512 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.512 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.512 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.512 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.512 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.512 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.512 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.512 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5203424 kB' 'MemAvailable: 9520800 kB' 'Buffers: 37584 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190720 kB' 'Inactive: 3369028 kB' 'Active(anon): 125900 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 135996 kB' 'Mapped: 72620 kB' 'Shmem: 2616 kB' 'KReclaimable: 207180 kB' 'Slab: 298168 kB' 'SReclaimable: 207180 kB' 'SUnreclaim: 90988 kB' 'KernelStack: 4356 kB' 'PageTables: 2912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 594324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: each field from MemTotal through HardwareCorrupted is compared to AnonHugePages and skipped via continue]
00:05:07.513 22:13:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.513 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.513 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.513 22:13:06 -- setup/hugepages.sh@97 -- # anon=0
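The stretch above is the start of verify_nr_hugepages: at setup/hugepages.sh@96 it first checks that transparent hugepages are not pinned to [never] before trusting AnonHugePages (0 kB in this run, hence anon=0). A hedged sketch of that gate, reusing the get_meminfo sketch from earlier:

    # Sketch of the gate traced at setup/hugepages.sh@96-97 (assumption: condensed).
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled) # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous huge pages, so their count is worth reading.
        anon=$(get_meminfo AnonHugePages) # 0 here
    fi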
00:05:07.513 22:13:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.513 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.513 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.513 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.513 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.513 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.513 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.513 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.513 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.513 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.513 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.513 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.514 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5203624 kB' 'MemAvailable: 9521000 kB' 'Buffers: 37584 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190624 kB' 'Inactive: 3369028 kB' 'Active(anon): 125804 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 135452 kB' 'Mapped: 72668 kB' 'Shmem: 2616 kB' 'KReclaimable: 207180 kB' 'Slab: 297916 kB' 'SReclaimable: 207180 kB' 'SUnreclaim: 90736 kB' 'KernelStack: 4344 kB' 'PageTables: 2812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 594324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: each field from MemTotal through HugePages_Rsvd is compared to HugePages_Surp and skipped via continue]
00:05:07.515 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.515 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.515 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.515 22:13:06 -- setup/hugepages.sh@99 -- # surp=0
00:05:07.515 22:13:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:07.515 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:07.515 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.515 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.515 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.515 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.515 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.515 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.515 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.515 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.515 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.515 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.515 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5203624 kB' 'MemAvailable: 9521000 kB' 'Buffers: 37584 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190884 kB' 'Inactive: 3369028 kB' 'Active(anon): 126064 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 135972 kB' 'Mapped: 72668 kB' 'Shmem: 2616 kB' 'KReclaimable: 207180 kB' 'Slab: 297916 kB' 'SReclaimable: 207180 kB' 'SUnreclaim: 90736 kB' 'KernelStack: 4344 kB' 'PageTables: 2812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 593556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: each field from MemTotal through HugePages_Free is compared to HugePages_Rsvd and skipped via continue]
00:05:07.517 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.517 22:13:06 -- setup/common.sh@33 -- # echo 0
00:05:07.517 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.517 22:13:06 -- setup/hugepages.sh@100 -- # resv=0
00:05:07.517 22:13:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:07.517 nr_hugepages=1024
00:05:07.517 22:13:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:07.517 resv_hugepages=0
00:05:07.517 22:13:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:07.517 surplus_hugepages=0
00:05:07.517 22:13:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:07.517 anon_hugepages=0
00:05:07.517 22:13:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:07.517 22:13:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
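With surp=0, resv=0, anon=0 and nr_hugepages=1024 echoed above, the checks at setup/hugepages.sh@107-109 assert that the kernel's hugepage pool is self-consistent before the per-node pass. The arithmetic, sketched under the same assumptions as the helpers above:

    # Consistency check traced at setup/hugepages.sh@107-110 (assumption: condensed).
    nr_hugepages=1024                      # expected pool size for this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1024
    # 1024 pages x 2048 kB page size matches the 'Hugetlb: 2097152 kB' snapshots.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

In this run total is 1024 with no surplus or reserved pages, so both comparisons succeed.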
00:05:07.517 22:13:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:07.517 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:07.517 22:13:06 -- setup/common.sh@18 -- # local node=
00:05:07.517 22:13:06 -- setup/common.sh@19 -- # local var val
00:05:07.517 22:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:07.517 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.517 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.517 22:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.517 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.517 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.517 22:13:06 -- setup/common.sh@31 -- # IFS=': '
00:05:07.517 22:13:06 -- setup/common.sh@31 -- # read -r var val _
00:05:07.517 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5203624 kB' 'MemAvailable: 9521000 kB' 'Buffers: 37584 kB' 'Cached: 4405292 kB' 'SwapCached: 0 kB' 'Active: 1190884 kB' 'Inactive: 3369028 kB' 'Active(anon): 126064 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 135456 kB' 'Mapped: 72668 kB' 'Shmem: 2616 kB' 'KReclaimable: 207180 kB' 'Slab: 297916 kB' 'SReclaimable: 207180 kB' 'SUnreclaim: 90736 kB' 'KernelStack: 4412 kB' 'PageTables: 2812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 592456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: each field from MemTotal through CmaFree is compared to HugePages_Total and skipped via continue]
00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:07.778 22:13:06 -- setup/common.sh@33 -- # echo 1024
00:05:07.778 22:13:06 -- setup/common.sh@33 -- # return 0
00:05:07.778 22:13:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:07.778 22:13:06 -- setup/hugepages.sh@112 -- # get_nodes
00:05:07.778 22:13:06 -- setup/hugepages.sh@27 -- # local node
00:05:07.778 22:13:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.778 22:13:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.778 22:13:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.778 22:13:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.778 22:13:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.778 22:13:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.778 22:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.778 22:13:06 -- setup/common.sh@18 -- # local node=0 00:05:07.778 22:13:06 -- setup/common.sh@19 -- # local var val 00:05:07.778 22:13:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.778 22:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.778 22:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.778 22:13:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.778 22:13:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.778 22:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5204152 kB' 'MemUsed: 7046952 kB' 'Active: 1190788 kB' 'Inactive: 3369028 kB' 'Active(anon): 125968 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064820 kB' 'Inactive(file): 3367240 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 4442876 kB' 'Mapped: 72660 kB' 'AnonPages: 134972 kB' 'Shmem: 2616 kB' 'KernelStack: 4384 kB' 'PageTables: 3068 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207180 kB' 'Slab: 297948 kB' 'SReclaimable: 207180 kB' 'SUnreclaim: 90768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.778 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.778 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 
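The scan abridged above is the get_meminfo helper in test/setup/common.sh: it loads the relevant meminfo file and walks it key by key until the requested field matches, which is why every non-matching field shows up as a continue in the trace. A minimal sketch of that pattern, reconstructed from the xtrace output (the exact source text of the script may differ):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo loop driving the trace above; function layout
    # is reconstructed from the xtrace, not copied from setup/common.sh.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # per-node queries read the node's own meminfo file instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        # scan field by field; every non-matching key is one continue in the trace
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # system-wide, e.g. 1024
    get_meminfo HugePages_Surp 0   # node0, e.g. 0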
00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # continue 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.779 22:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.779 22:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.779 22:13:06 -- setup/common.sh@33 -- # echo 0 00:05:07.779 22:13:06 -- setup/common.sh@33 -- # return 0 00:05:07.779 22:13:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.779 22:13:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.779 22:13:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.779 22:13:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.779 node0=1024 expecting 1024 00:05:07.779 22:13:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.779 22:13:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.779 00:05:07.779 real 0m1.316s 00:05:07.779 user 0m0.506s 00:05:07.779 sys 0m0.878s 00:05:07.779 22:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.779 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.779 ************************************ 00:05:07.779 END TEST no_shrink_alloc 00:05:07.779 ************************************ 00:05:07.779 22:13:06 -- setup/hugepages.sh@217 -- # clear_hp 00:05:07.779 22:13:06 -- setup/hugepages.sh@37 -- # local node hp 00:05:07.779 22:13:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.779 22:13:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.779 22:13:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.779 22:13:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.779 22:13:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.779 22:13:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.779 22:13:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.779 00:05:07.779 real 0m5.987s 00:05:07.779 user 0m2.081s 00:05:07.779 sys 0m4.049s 00:05:07.779 22:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.779 ************************************ 00:05:07.779 END TEST hugepages 00:05:07.779 ************************************ 00:05:07.779 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.779 22:13:06 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:07.779 22:13:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.779 22:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.779 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.779 ************************************ 00:05:07.779 START TEST driver 00:05:07.779 ************************************ 00:05:07.779 22:13:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:07.779 * Looking for test storage... 
00:05:07.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.780 22:13:06 -- setup/driver.sh@68 -- # setup reset 00:05:07.780 22:13:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.780 22:13:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.345 22:13:07 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:08.345 22:13:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.345 22:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.345 22:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.345 ************************************ 00:05:08.345 START TEST guess_driver 00:05:08.345 ************************************ 00:05:08.345 22:13:07 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:08.345 22:13:07 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:08.345 22:13:07 -- setup/driver.sh@47 -- # local fail=0 00:05:08.345 22:13:07 -- setup/driver.sh@49 -- # pick_driver 00:05:08.345 22:13:07 -- setup/driver.sh@36 -- # vfio 00:05:08.345 22:13:07 -- setup/driver.sh@21 -- # local iommu_grups 00:05:08.345 22:13:07 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:08.345 22:13:07 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:08.345 22:13:07 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:08.345 22:13:07 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:08.345 22:13:07 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:08.345 22:13:07 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:08.345 22:13:07 -- setup/driver.sh@32 -- # return 1 00:05:08.345 22:13:07 -- setup/driver.sh@38 -- # uio 00:05:08.345 22:13:07 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:05:08.345 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:08.345 22:13:07 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:08.345 Looking for driver=uio_pci_generic 00:05:08.345 22:13:07 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:08.345 22:13:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.345 22:13:07 -- setup/driver.sh@45 -- # setup output config 00:05:08.345 22:13:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.345 22:13:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.604 22:13:07 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:08.604 22:13:07 -- setup/driver.sh@58 -- # continue 00:05:08.604 22:13:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.604 22:13:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.604 22:13:07 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:08.604 22:13:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.980 22:13:08 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:09.980 22:13:08 -- setup/driver.sh@65 -- # setup reset 
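The guess_driver trace above shows the pick order: vfio is preferred, but this VM has zero IOMMU groups and unsafe no-IOMMU mode off, so the test falls back to uio_pci_generic, probing it with modprobe --show-depends. A sketch of that decision, reconstructed from the trace (names not visible in the trace are assumptions):

    #!/usr/bin/env bash
    # Sketch of the driver pick traced from test/setup/driver.sh above.
    shopt -s nullglob   # an empty /sys/kernel/iommu_groups must give an empty array

    pick_driver() {
        local iommu_groups unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio is only usable with IOMMU groups present or unsafe no-IOMMU on;
        # the run above saw 0 groups and unsafe_vfio=N, hence the fallback
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # modprobe --show-depends prints the insmod lines without loading;
        # a *.ko* match proves the module exists for the running kernel
        if [[ $(modprobe --show-depends uio_pci_generic 2>/dev/null) == *.ko* ]]; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

    pick_driver   # on this VM: uio_pci_generic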
00:05:09.980 22:13:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.980 22:13:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.238 00:05:10.238 real 0m1.916s 00:05:10.238 user 0m0.425s 00:05:10.238 sys 0m1.467s 00:05:10.238 22:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.238 ************************************ 00:05:10.238 END TEST guess_driver 00:05:10.238 ************************************ 00:05:10.238 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.238 00:05:10.238 real 0m2.441s 00:05:10.238 user 0m0.694s 00:05:10.238 sys 0m1.722s 00:05:10.238 22:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.238 ************************************ 00:05:10.238 END TEST driver 00:05:10.238 ************************************ 00:05:10.238 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.238 22:13:09 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.238 22:13:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.238 22:13:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.238 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.238 ************************************ 00:05:10.238 START TEST devices 00:05:10.238 ************************************ 00:05:10.238 22:13:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.238 * Looking for test storage... 00:05:10.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.238 22:13:09 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.238 22:13:09 -- setup/devices.sh@192 -- # setup reset 00:05:10.238 22:13:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.238 22:13:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.805 22:13:09 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.805 22:13:09 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:10.805 22:13:09 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:10.805 22:13:09 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:10.805 22:13:09 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.805 22:13:09 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:10.805 22:13:09 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:10.805 22:13:09 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.805 22:13:09 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.805 22:13:09 -- setup/devices.sh@196 -- # blocks=() 00:05:10.805 22:13:09 -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.805 22:13:09 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.805 22:13:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.805 22:13:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.805 22:13:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.805 22:13:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.805 22:13:09 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.805 22:13:09 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:10.805 22:13:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:10.805 22:13:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.805 22:13:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:10.805 22:13:09 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:10.805 No valid GPT data, bailing 00:05:10.805 22:13:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.805 22:13:09 -- scripts/common.sh@393 -- # pt= 00:05:10.805 22:13:09 -- scripts/common.sh@394 -- # return 1 00:05:10.805 22:13:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.805 22:13:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.805 22:13:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.805 22:13:09 -- setup/common.sh@80 -- # echo 5368709120 00:05:10.805 22:13:09 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:10.805 22:13:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.805 22:13:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:10.805 22:13:09 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:10.805 22:13:09 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:10.805 22:13:09 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.805 22:13:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.805 22:13:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.805 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.805 ************************************ 00:05:10.805 START TEST nvme_mount 00:05:10.805 ************************************ 00:05:10.805 22:13:09 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:10.805 22:13:09 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:10.805 22:13:09 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:10.805 22:13:09 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.805 22:13:09 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:10.805 22:13:09 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:10.805 22:13:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.805 22:13:09 -- setup/common.sh@40 -- # local part_no=1 00:05:10.805 22:13:09 -- setup/common.sh@41 -- # local size=1073741824 00:05:10.805 22:13:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.805 22:13:09 -- setup/common.sh@44 -- # parts=() 00:05:10.805 22:13:09 -- setup/common.sh@44 -- # local parts 00:05:10.805 22:13:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.805 22:13:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.805 22:13:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.805 22:13:09 -- setup/common.sh@46 -- # (( part++ )) 00:05:10.805 22:13:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.805 22:13:09 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.805 22:13:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.805 22:13:09 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.180 Creating new GPT entries in memory. 00:05:12.180 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.180 other utilities. 00:05:12.180 22:13:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.180 22:13:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.180 22:13:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:12.180 22:13:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.180 22:13:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:13.177 Creating new GPT entries in memory. 00:05:13.177 The operation has completed successfully. 00:05:13.177 22:13:11 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.177 22:13:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.177 22:13:11 -- setup/common.sh@62 -- # wait 98295 00:05:13.177 22:13:11 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.177 22:13:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:13.177 22:13:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.177 22:13:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.177 22:13:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.177 22:13:11 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.177 22:13:11 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.177 22:13:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:13.177 22:13:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.177 22:13:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.177 22:13:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.177 22:13:11 -- setup/devices.sh@53 -- # local found=0 00:05:13.177 22:13:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.177 22:13:11 -- setup/devices.sh@56 -- # : 00:05:13.177 22:13:11 -- setup/devices.sh@59 -- # local pci status 00:05:13.177 22:13:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.177 22:13:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:13.177 22:13:11 -- setup/devices.sh@47 -- # setup output config 00:05:13.177 22:13:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.177 22:13:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.177 22:13:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:13.177 22:13:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:13.177 22:13:12 -- setup/devices.sh@63 -- # found=1 00:05:13.177 22:13:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.177 22:13:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:13.177 22:13:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.177 22:13:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:13.177 22:13:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.553 22:13:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.553 22:13:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.553 22:13:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.553 22:13:13 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.553 22:13:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.553 22:13:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.553 22:13:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.553 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.553 22:13:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.553 22:13:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.553 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.553 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.553 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.553 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.553 22:13:13 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:14.553 22:13:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:14.553 22:13:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:14.553 22:13:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:14.553 22:13:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.553 22:13:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:14.553 22:13:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:14.553 22:13:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.553 22:13:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.553 22:13:13 -- setup/devices.sh@53 -- # local found=0 00:05:14.553 22:13:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.553 22:13:13 -- setup/devices.sh@56 -- # : 00:05:14.553 22:13:13 -- setup/devices.sh@59 -- # local pci status 00:05:14.553 22:13:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.553 22:13:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:14.553 22:13:13 -- setup/devices.sh@47 -- # setup output config 00:05:14.553 22:13:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.553 22:13:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.811 22:13:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.811 22:13:13 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.811 22:13:13 -- setup/devices.sh@63 -- # found=1 00:05:14.811 22:13:13 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:14.811 22:13:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.811 22:13:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.811 22:13:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.811 22:13:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.185 22:13:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.185 22:13:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.185 22:13:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.185 22:13:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.185 22:13:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.185 22:13:14 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.185 22:13:14 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:16.185 22:13:14 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:16.185 22:13:14 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:16.185 22:13:14 -- setup/devices.sh@50 -- # local mount_point= 00:05:16.185 22:13:14 -- setup/devices.sh@51 -- # local test_file= 00:05:16.185 22:13:14 -- setup/devices.sh@53 -- # local found=0 00:05:16.185 22:13:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.185 22:13:14 -- setup/devices.sh@59 -- # local pci status 00:05:16.185 22:13:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.185 22:13:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:16.185 22:13:14 -- setup/devices.sh@47 -- # setup output config 00:05:16.185 22:13:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.185 22:13:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.185 22:13:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.185 22:13:15 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:16.185 22:13:15 -- setup/devices.sh@63 -- # found=1 00:05:16.185 22:13:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.185 22:13:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.185 22:13:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.185 22:13:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.185 22:13:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.563 22:13:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.563 22:13:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.563 22:13:16 -- setup/devices.sh@68 -- # return 0 00:05:17.563 22:13:16 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.563 22:13:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.563 22:13:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.563 22:13:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.563 22:13:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.563 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.563 00:05:17.563 real 0m6.572s 00:05:17.563 user 0m0.751s 00:05:17.563 sys 0m3.694s 00:05:17.563 22:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.563 22:13:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.563 ************************************ 00:05:17.563 END TEST nvme_mount 00:05:17.563 ************************************ 00:05:17.563 22:13:16 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.563 22:13:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.563 22:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.563 22:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:17.563 ************************************ 00:05:17.563 START TEST dm_mount 00:05:17.563 ************************************ 00:05:17.563 22:13:16 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:17.563 22:13:16 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.563 22:13:16 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.563 22:13:16 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.563 22:13:16 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.563 22:13:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.563 22:13:16 -- setup/common.sh@40 -- # local part_no=2 00:05:17.563 22:13:16 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.563 22:13:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.563 22:13:16 -- setup/common.sh@44 -- # parts=() 00:05:17.563 22:13:16 -- setup/common.sh@44 -- # local parts 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.563 22:13:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.563 22:13:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.563 22:13:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.563 22:13:16 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.563 22:13:16 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.563 22:13:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.500 Creating new GPT entries in memory. 00:05:18.500 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.500 other utilities. 00:05:18.500 22:13:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.500 22:13:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.500 22:13:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.500 22:13:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.500 22:13:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:19.877 Creating new GPT entries in memory. 00:05:19.877 The operation has completed successfully. 00:05:19.877 22:13:18 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.877 22:13:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.877 22:13:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.877 22:13:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.877 22:13:18 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:20.813 The operation has completed successfully. 
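Both mount suites start from partition_drive, whose flow is fully visible in the trace: zap the GPT with sgdisk, then create fixed-size partitions one flock-serialized sgdisk call at a time while sync_dev_uevents.sh waits in the background for the kernel's partition uevents. A sketch reconstructed from the trace ($rootdir and the defaults are assumptions):

    #!/usr/bin/env bash
    # Sketch of the partition_drive flow traced for nvme_mount and dm_mount above.
    rootdir=/home/vagrant/spdk_repo/spdk   # assumed repo root

    partition_drive() {
        local disk=$1 part_no=${2:-2} size=1073741824   # 1 GiB, as in the trace
        local part part_start=0 part_end=0
        local parts=()
        for ((part = 1; part <= part_no; part++)); do
            parts+=("${disk}p$part")
        done
        (( size /= 4096 ))   # 262144: each partition spans 262144 512-byte sectors (128 MiB)
        # watch for the partition add-uevents in the background while sgdisk runs
        "$rootdir/scripts/sync_dev_uevents.sh" block/partition "${parts[@]}" &
        sgdisk "/dev/$disk" --zap-all   # prints "GPT data structures destroyed!"
        for ((part = 1; part <= part_no; part++)); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            # flock serializes sgdisk against other users of the same disk
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
        wait $!   # the "wait 98295" / "wait 98787" lines in the trace
    }

    partition_drive nvme0n1 1   # nvme_mount: one partition, sectors 2048-264191
    partition_drive nvme0n1     # dm_mount: two partitions, as traced above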
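Teardown between these runs is cleanup_nvme, traced after the first nvme_mount pass above: unmount if anything is mounted, then wipefs the partition and the whole disk so the next test starts from clean signatures (the "N bytes were erased ..." lines). A sketch reconstructed from the trace:

    #!/usr/bin/env bash
    # Sketch of the cleanup_nvme teardown traced in setup/devices.sh above.
    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    cleanup_nvme() {
        # unmount only if something is actually mounted there
        if mountpoint -q "$nvme_mount"; then
            umount "$nvme_mount"
        fi
        # wipe the filesystem signature off the partition first, then the
        # GPT/PMBR signatures off the whole disk
        if [[ -b /dev/nvme0n1p1 ]]; then
            wipefs --all /dev/nvme0n1p1
        fi
        if [[ -b /dev/nvme0n1 ]]; then
            wipefs --all /dev/nvme0n1
        fi
    }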
00:05:20.813 22:13:19 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.813 22:13:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.813 22:13:19 -- setup/common.sh@62 -- # wait 98787 00:05:20.813 22:13:19 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.814 22:13:19 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.814 22:13:19 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.814 22:13:19 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.814 22:13:19 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.814 22:13:19 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.814 22:13:19 -- setup/devices.sh@161 -- # break 00:05:20.814 22:13:19 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.814 22:13:19 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.814 22:13:19 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.814 22:13:19 -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.814 22:13:19 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.814 22:13:19 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.814 22:13:19 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.814 22:13:19 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:20.814 22:13:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.814 22:13:19 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.814 22:13:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.814 22:13:19 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.814 22:13:19 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.814 22:13:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:20.814 22:13:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.814 22:13:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.814 22:13:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.814 22:13:19 -- setup/devices.sh@53 -- # local found=0 00:05:20.814 22:13:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.814 22:13:19 -- setup/devices.sh@56 -- # : 00:05:20.814 22:13:19 -- setup/devices.sh@59 -- # local pci status 00:05:20.814 22:13:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.814 22:13:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:20.814 22:13:19 -- setup/devices.sh@47 -- # setup output config 00:05:20.814 22:13:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.814 22:13:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.071 22:13:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.071 22:13:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.071 22:13:19 -- setup/devices.sh@63 -- # found=1 00:05:21.071 22:13:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.071 22:13:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.071 22:13:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.071 22:13:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.071 22:13:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.500 22:13:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.500 22:13:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:22.500 22:13:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.500 22:13:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.500 22:13:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.500 22:13:21 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.758 22:13:21 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.758 22:13:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:22.758 22:13:21 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.758 22:13:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.758 22:13:21 -- setup/devices.sh@51 -- # local test_file= 00:05:22.758 22:13:21 -- setup/devices.sh@53 -- # local found=0 00:05:22.758 22:13:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.758 22:13:21 -- setup/devices.sh@59 -- # local pci status 00:05:22.758 22:13:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.758 22:13:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:22.758 22:13:21 -- setup/devices.sh@47 -- # setup output config 00:05:22.758 22:13:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.758 22:13:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.758 22:13:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.758 22:13:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.758 22:13:21 -- setup/devices.sh@63 -- # found=1 00:05:22.758 22:13:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.758 22:13:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.758 22:13:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.016 22:13:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:23.016 22:13:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.950 22:13:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.950 22:13:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.950 22:13:22 -- setup/devices.sh@68 -- # return 0 00:05:23.950 22:13:22 -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.950 22:13:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.950 22:13:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.950 22:13:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.950 22:13:22 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.950 22:13:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.950 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.950 22:13:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.950 22:13:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.950 ************************************ 00:05:23.950 END TEST dm_mount 00:05:23.950 ************************************ 00:05:23.950 00:05:23.950 real 0m6.633s 00:05:23.950 user 0m0.499s 00:05:23.950 sys 0m2.866s 00:05:23.950 22:13:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.950 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.208 22:13:23 -- setup/devices.sh@1 -- # cleanup 00:05:24.208 22:13:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:24.208 22:13:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.208 22:13:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.208 22:13:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.208 22:13:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.208 22:13:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.208 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.208 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.208 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.208 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:24.208 22:13:23 -- setup/devices.sh@12 -- # cleanup_dm 00:05:24.208 22:13:23 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.208 22:13:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:24.208 22:13:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.208 22:13:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:24.208 22:13:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.208 22:13:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:24.208 ************************************ 00:05:24.208 END TEST devices 00:05:24.208 ************************************ 00:05:24.208 00:05:24.208 real 0m13.974s 00:05:24.208 user 0m1.648s 00:05:24.208 sys 0m6.873s 00:05:24.208 22:13:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.208 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.208 00:05:24.208 real 0m27.316s 00:05:24.208 user 0m6.116s 00:05:24.208 sys 0m15.894s 00:05:24.208 22:13:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.208 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.208 ************************************ 00:05:24.208 END TEST setup.sh 00:05:24.208 ************************************ 00:05:24.208 22:13:23 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:24.466 Hugepages 00:05:24.466 node hugesize free / total 00:05:24.466 node0 1048576kB 0 / 0 00:05:24.466 node0 2048kB 2048 / 2048 00:05:24.466 00:05:24.466 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.466 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:24.466 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:24.466 22:13:23 -- spdk/autotest.sh@141 -- # uname -s 00:05:24.466 22:13:23 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:24.466 22:13:23 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:24.466 22:13:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:25.030 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.963 22:13:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:26.896 22:13:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:26.896 22:13:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:26.896 22:13:25 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.896 22:13:25 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:26.896 22:13:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.896 22:13:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.896 22:13:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.896 22:13:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.896 22:13:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:27.154 22:13:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:27.154 22:13:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:27.154 22:13:26 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.412 Waiting for block devices as requested 00:05:27.412 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.412 22:13:26 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:27.412 22:13:26 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:27.412 22:13:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:27.412 22:13:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:27.412 22:13:26 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:27.412 22:13:26 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:27.412 22:13:26 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:27.412 22:13:26 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:27.412 22:13:26 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:27.412 22:13:26 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:27.412 22:13:26 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:27.412 22:13:26 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:27.412 22:13:26 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:27.412 22:13:26 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:27.412 22:13:26 -- common/autotest_common.sh@1542 -- # continue 00:05:27.412 22:13:26 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:27.412 22:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.412 22:13:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.412 22:13:26 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:27.412 22:13:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.412 22:13:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.670 22:13:26 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.929 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.307 22:13:28 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:29.307 22:13:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.307 22:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.307 22:13:28 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:29.307 22:13:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:29.307 22:13:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.307 22:13:28 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:29.307 22:13:28 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:29.307 22:13:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:29.307 22:13:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.307 22:13:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.307 22:13:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.307 22:13:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.307 22:13:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:29.307 22:13:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:29.307 22:13:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:29.307 22:13:28 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:29.307 22:13:28 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:29.307 22:13:28 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:29.307 22:13:28 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.307 22:13:28 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:29.307 22:13:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:29.307 22:13:28 -- common/autotest_common.sh@1578 -- # return 0 00:05:29.307 22:13:28 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:29.307 22:13:28 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.307 22:13:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.307 22:13:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.307 22:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.307 ************************************ 00:05:29.307 START TEST unittest 00:05:29.307 ************************************ 00:05:29.307 22:13:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.307 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.307 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.307 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.307 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.307 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:29.307 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:29.307 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:29.307 ++ rpc_py=rpc_cmd 00:05:29.307 ++ set -e 00:05:29.307 ++ shopt -s nullglob 00:05:29.307 ++ shopt -s extglob 00:05:29.307 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:29.307 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:29.307 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:29.307 +++ CONFIG_FIO_PLUGIN=y 00:05:29.307 +++ CONFIG_NVME_CUSE=y 00:05:29.307 +++ CONFIG_RAID5F=y 00:05:29.307 +++ CONFIG_LTO=n 00:05:29.307 +++ CONFIG_SMA=n 00:05:29.307 +++ CONFIG_ISAL=y 00:05:29.307 +++ CONFIG_OPENSSL_PATH= 00:05:29.307 +++ CONFIG_IDXD_KERNEL=n 00:05:29.307 +++ CONFIG_URING_PATH= 00:05:29.307 +++ CONFIG_DAOS=n 00:05:29.307 +++ CONFIG_DPDK_LIB_DIR= 00:05:29.307 +++ CONFIG_OCF=n 00:05:29.307 +++ CONFIG_EXAMPLES=y 00:05:29.307 +++ CONFIG_RDMA_PROV=verbs 00:05:29.307 +++ CONFIG_ISCSI_INITIATOR=y 00:05:29.307 +++ CONFIG_VTUNE=n 00:05:29.307 +++ CONFIG_DPDK_INC_DIR= 00:05:29.307 +++ CONFIG_CET=n 00:05:29.307 +++ CONFIG_TESTS=y 00:05:29.307 +++ CONFIG_APPS=y 00:05:29.307 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:29.307 +++ CONFIG_DAOS_DIR= 00:05:29.307 +++ CONFIG_CRYPTO_MLX5=n 00:05:29.307 +++ CONFIG_XNVME=n 00:05:29.307 +++ CONFIG_UNIT_TESTS=y 00:05:29.307 +++ CONFIG_FUSE=n 00:05:29.307 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:29.307 +++ CONFIG_OCF_PATH= 00:05:29.307 +++ CONFIG_WPDK_DIR= 00:05:29.308 +++ CONFIG_VFIO_USER=n 00:05:29.308 +++ CONFIG_MAX_LCORES= 00:05:29.308 +++ CONFIG_ARCH=native 00:05:29.308 +++ CONFIG_TSAN=n 00:05:29.308 +++ CONFIG_VIRTIO=y 00:05:29.308 +++ CONFIG_IPSEC_MB=n 00:05:29.308 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:29.308 +++ CONFIG_ASAN=y 00:05:29.308 +++ CONFIG_SHARED=n 00:05:29.308 +++ CONFIG_VTUNE_DIR= 00:05:29.308 +++ CONFIG_RDMA_SET_TOS=y 00:05:29.308 +++ CONFIG_VBDEV_COMPRESS=n 00:05:29.308 +++ CONFIG_VFIO_USER_DIR= 00:05:29.308 +++ CONFIG_FUZZER_LIB= 00:05:29.308 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:29.308 +++ CONFIG_USDT=n 00:05:29.308 +++ CONFIG_URING_ZNS=n 00:05:29.308 +++ CONFIG_FC_PATH= 00:05:29.308 +++ CONFIG_COVERAGE=y 00:05:29.308 +++ CONFIG_CUSTOMOCF=n 00:05:29.308 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:29.308 +++ CONFIG_WERROR=y 00:05:29.308 +++ CONFIG_DEBUG=y 00:05:29.308 +++ CONFIG_RDMA=y 00:05:29.308 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:29.308 +++ CONFIG_FUZZER=n 00:05:29.308 +++ CONFIG_FC=n 00:05:29.308 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.308 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:29.308 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:29.308 +++ CONFIG_CROSS_PREFIX= 00:05:29.308 +++ CONFIG_PREFIX=/usr/local 00:05:29.308 +++ CONFIG_HAVE_LIBBSD=n 00:05:29.308 +++ CONFIG_UBSAN=y 00:05:29.308 +++ CONFIG_PGO_CAPTURE=n 00:05:29.308 +++ CONFIG_UBLK=n 00:05:29.308 +++ CONFIG_ISAL_CRYPTO=y 00:05:29.308 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:29.308 +++ CONFIG_CRYPTO=n 00:05:29.308 +++ CONFIG_RBD=n 00:05:29.308 +++ CONFIG_LIBDIR= 00:05:29.308 +++ CONFIG_IPSEC_MB_DIR= 00:05:29.308 +++ CONFIG_PGO_USE=n 00:05:29.308 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.308 +++ CONFIG_GOLANG=n 00:05:29.308 +++ CONFIG_VHOST=y 00:05:29.308 +++ CONFIG_IDXD=y 00:05:29.308 +++ CONFIG_AVAHI=n 00:05:29.308 +++ CONFIG_URING=n 00:05:29.308 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:29.308 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:29.308 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:05:29.308 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:29.308 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:29.308 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.308 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:29.308 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.308 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:29.308 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:29.308 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:29.308 +++ VHOST_APP=("$_app_dir/vhost") 00:05:29.308 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:29.308 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:29.308 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:29.308 +++ [[ #ifndef SPDK_CONFIG_H 00:05:29.308 #define SPDK_CONFIG_H 00:05:29.308 #define SPDK_CONFIG_APPS 1 00:05:29.308 #define SPDK_CONFIG_ARCH native 00:05:29.308 #define SPDK_CONFIG_ASAN 1 00:05:29.308 #undef SPDK_CONFIG_AVAHI 00:05:29.308 #undef SPDK_CONFIG_CET 00:05:29.308 #define SPDK_CONFIG_COVERAGE 1 00:05:29.308 #define SPDK_CONFIG_CROSS_PREFIX 00:05:29.308 #undef SPDK_CONFIG_CRYPTO 00:05:29.308 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:29.308 #undef SPDK_CONFIG_CUSTOMOCF 00:05:29.308 #undef SPDK_CONFIG_DAOS 00:05:29.308 #define SPDK_CONFIG_DAOS_DIR 00:05:29.308 #define SPDK_CONFIG_DEBUG 1 00:05:29.308 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:29.308 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.308 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:29.308 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:29.308 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:29.308 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.308 #define SPDK_CONFIG_EXAMPLES 1 00:05:29.308 #undef SPDK_CONFIG_FC 00:05:29.308 #define SPDK_CONFIG_FC_PATH 00:05:29.308 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:29.308 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:29.308 #undef SPDK_CONFIG_FUSE 00:05:29.308 #undef SPDK_CONFIG_FUZZER 00:05:29.308 #define SPDK_CONFIG_FUZZER_LIB 00:05:29.308 #undef SPDK_CONFIG_GOLANG 00:05:29.308 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:29.308 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:29.308 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:29.308 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:29.308 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:29.308 #define SPDK_CONFIG_IDXD 1 00:05:29.308 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:29.308 #undef SPDK_CONFIG_IPSEC_MB 00:05:29.308 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:29.308 #define SPDK_CONFIG_ISAL 1 00:05:29.308 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:29.308 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:29.308 #define SPDK_CONFIG_LIBDIR 00:05:29.308 #undef SPDK_CONFIG_LTO 00:05:29.308 #define SPDK_CONFIG_MAX_LCORES 00:05:29.308 #define SPDK_CONFIG_NVME_CUSE 1 00:05:29.308 #undef SPDK_CONFIG_OCF 00:05:29.308 #define SPDK_CONFIG_OCF_PATH 00:05:29.308 #define SPDK_CONFIG_OPENSSL_PATH 00:05:29.308 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:29.308 #undef SPDK_CONFIG_PGO_USE 00:05:29.308 #define SPDK_CONFIG_PREFIX /usr/local 00:05:29.308 #define SPDK_CONFIG_RAID5F 1 00:05:29.308 #undef SPDK_CONFIG_RBD 00:05:29.308 #define SPDK_CONFIG_RDMA 1 00:05:29.308 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:29.308 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:29.308 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:29.308 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:29.308 #undef SPDK_CONFIG_SHARED 00:05:29.308 #undef SPDK_CONFIG_SMA 00:05:29.308 #define SPDK_CONFIG_TESTS 1 00:05:29.308 
#undef SPDK_CONFIG_TSAN 00:05:29.308 #undef SPDK_CONFIG_UBLK 00:05:29.308 #define SPDK_CONFIG_UBSAN 1 00:05:29.308 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:29.308 #undef SPDK_CONFIG_URING 00:05:29.308 #define SPDK_CONFIG_URING_PATH 00:05:29.308 #undef SPDK_CONFIG_URING_ZNS 00:05:29.308 #undef SPDK_CONFIG_USDT 00:05:29.308 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:29.308 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:29.308 #undef SPDK_CONFIG_VFIO_USER 00:05:29.308 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:29.308 #define SPDK_CONFIG_VHOST 1 00:05:29.308 #define SPDK_CONFIG_VIRTIO 1 00:05:29.308 #undef SPDK_CONFIG_VTUNE 00:05:29.308 #define SPDK_CONFIG_VTUNE_DIR 00:05:29.308 #define SPDK_CONFIG_WERROR 1 00:05:29.308 #define SPDK_CONFIG_WPDK_DIR 00:05:29.308 #undef SPDK_CONFIG_XNVME 00:05:29.308 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:29.308 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:29.308 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.308 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:29.308 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.308 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.308 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.308 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.308 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.308 ++++ export PATH 00:05:29.308 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.308 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:29.308 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:29.308 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:29.308 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:29.308 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:29.308 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:29.308 +++ TEST_TAG=N/A 00:05:29.308 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:29.308 ++ : 1 00:05:29.308 ++ export RUN_NIGHTLY 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_RUN_VALGRIND 00:05:29.308 ++ : 1 00:05:29.308 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:29.308 ++ : 1 00:05:29.308 ++ export SPDK_TEST_UNITTEST 00:05:29.308 ++ : 00:05:29.308 ++ export SPDK_TEST_AUTOBUILD 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_RELEASE_BUILD 00:05:29.308 ++ : 0 
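The backslash-soaked pattern a few entries back is only xtrace quoting: applications.sh reads include/spdk/config.h and glob-matches it against "#define SPDK_CONFIG_DEBUG" before consulting SPDK_AUTOTEST_DEBUG_APPS. A hedged sketch of that gate (only the condition is visible in the trace; the branch body here is hypothetical):

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
        && (( SPDK_AUTOTEST_DEBUG_APPS )); then
        :   # hypothetical: point SPDK_APP and friends at debug builds
    fi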
00:05:29.308 ++ export SPDK_TEST_ISAL 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_ISCSI 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:29.308 ++ : 1 00:05:29.308 ++ export SPDK_TEST_NVME 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVME_PMR 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVME_BP 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVME_CLI 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVME_CUSE 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVME_FDP 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_NVMF 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_VFIOUSER 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_FUZZER 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_FUZZER_SHORT 00:05:29.308 ++ : rdma 00:05:29.308 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_RBD 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_VHOST 00:05:29.308 ++ : 1 00:05:29.308 ++ export SPDK_TEST_BLOCKDEV 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_IOAT 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_BLOBFS 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_VHOST_INIT 00:05:29.308 ++ : 0 00:05:29.308 ++ export SPDK_TEST_LVOL 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:29.309 ++ : 1 00:05:29.309 ++ export SPDK_RUN_ASAN 00:05:29.309 ++ : 1 00:05:29.309 ++ export SPDK_RUN_UBSAN 00:05:29.309 ++ : 00:05:29.309 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_RUN_NON_ROOT 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_CRYPTO 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_FTL 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_OCF 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_VMD 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_OPAL 00:05:29.309 ++ : 00:05:29.309 ++ export SPDK_TEST_NATIVE_DPDK 00:05:29.309 ++ : true 00:05:29.309 ++ export SPDK_AUTOTEST_X 00:05:29.309 ++ : 1 00:05:29.309 ++ export SPDK_TEST_RAID5 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_URING 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_USDT 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_USE_IGB_UIO 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_SCHEDULER 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_SCANBUILD 00:05:29.309 ++ : 00:05:29.309 ++ export SPDK_TEST_NVMF_NICS 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_SMA 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_DAOS 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_XNVME 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_ACCEL_DSA 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_ACCEL_IAA 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_ACCEL_IOAT 00:05:29.309 ++ : 00:05:29.309 ++ export SPDK_TEST_FUZZER_TARGET 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_TEST_NVMF_MDNS 00:05:29.309 ++ : 0 00:05:29.309 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:29.309 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:29.309 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:29.309 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:29.309 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:29.309 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.309 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.309 ++ export 
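The paired '++ : 0' / '++ export SPDK_TEST_...' entries running through this stretch are the xtrace of a defaulting idiom: ':' forces the parameter expansion, and ':=' assigns the default only when the flag is unset, so values loaded from autorun-spdk.conf (RUN_NIGHTLY=1, SPDK_TEST_NVME=1, ...) survive untouched. Two flags as a sketch, matching the trace pattern:

    : "${SPDK_TEST_NVME_CLI:=0}"   # traces as '++ : 0' when the flag is unset
    export SPDK_TEST_NVME_CLI      # traces as '++ export SPDK_TEST_NVME_CLI'
    : "${RUN_NIGHTLY:=0}"          # already 1 from autorun-spdk.conf, so the
    export RUN_NIGHTLY             # default is kept out and '++ : 1' is traced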
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.309 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.309 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:29.309 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:29.309 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.309 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.309 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:29.309 ++ PYTHONDONTWRITEBYTECODE=1 00:05:29.309 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:29.309 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:29.309 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:29.309 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:29.309 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:29.309 ++ rm -rf /var/tmp/asan_suppression_file 00:05:29.309 ++ cat 00:05:29.309 ++ echo leak:libfuse3.so 00:05:29.309 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:29.309 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:29.309 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:29.309 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:29.309 ++ '[' -z /var/spdk/dependencies ']' 00:05:29.309 ++ export DEPENDENCY_DIR 00:05:29.309 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.309 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.309 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.309 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.309 ++ export QEMU_BIN= 00:05:29.309 ++ QEMU_BIN= 00:05:29.309 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:29.309 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:29.309 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:29.309 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:29.309 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:29.309 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:29.309 ++ '[' 0 -eq 0 ']' 00:05:29.309 ++ export valgrind= 00:05:29.309 ++ valgrind= 00:05:29.309 +++ uname -s 00:05:29.309 ++ '[' Linux = Linux ']' 00:05:29.309 ++ HUGEMEM=4096 00:05:29.309 ++ export CLEAR_HUGE=yes 00:05:29.309 ++ CLEAR_HUGE=yes 00:05:29.309 ++ [[ 0 -eq 1 ]] 00:05:29.309 ++ [[ 0 -eq 1 ]] 00:05:29.309 ++ MAKE=make 00:05:29.309 +++ nproc 00:05:29.309 ++ MAKEFLAGS=-j10 00:05:29.309 ++ export HUGEMEM=4096 00:05:29.309 ++ HUGEMEM=4096 00:05:29.309 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:29.309 ++ NO_HUGE=() 00:05:29.309 ++ TEST_MODE= 00:05:29.309 ++ [[ -z '' ]] 00:05:29.309 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:29.309 ++ exec 00:05:29.309 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:29.309 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:29.309 ++ set_test_storage 2147483648 00:05:29.309 ++ [[ -v testdir ]] 00:05:29.309 ++ local requested_size=2147483648 00:05:29.309 ++ local mount target_dir 00:05:29.309 ++ local -A mounts fss sizes avails uses 00:05:29.309 ++ local source fs size avail mount use 00:05:29.309 ++ local storage_fallback storage_candidates 00:05:29.309 +++ mktemp -udt spdk.XXXXXX 00:05:29.309 ++ storage_fallback=/tmp/spdk.mgTfU1 00:05:29.309 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:29.309 ++ [[ -n '' ]] 00:05:29.309 ++ [[ -n '' ]] 00:05:29.309 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.mgTfU1/tests/unit /tmp/spdk.mgTfU1 00:05:29.309 ++ requested_size=2214592512 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 +++ df -T 00:05:29.309 +++ grep -v Filesystem 00:05:29.309 ++ mounts["$mount"]=udev 00:05:29.309 ++ fss["$mount"]=devtmpfs 00:05:29.309 ++ avails["$mount"]=6224465920 00:05:29.309 ++ sizes["$mount"]=6224465920 00:05:29.309 ++ uses["$mount"]=0 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=tmpfs 00:05:29.309 ++ fss["$mount"]=tmpfs 00:05:29.309 ++ avails["$mount"]=1253408768 00:05:29.309 ++ sizes["$mount"]=1254514688 00:05:29.309 ++ uses["$mount"]=1105920 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=/dev/vda1 00:05:29.309 ++ fss["$mount"]=ext4 00:05:29.309 ++ avails["$mount"]=10737549312 00:05:29.309 ++ sizes["$mount"]=20616794112 00:05:29.309 ++ uses["$mount"]=9862467584 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=tmpfs 00:05:29.309 ++ fss["$mount"]=tmpfs 00:05:29.309 ++ avails["$mount"]=6272565248 00:05:29.309 ++ sizes["$mount"]=6272565248 00:05:29.309 ++ uses["$mount"]=0 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=tmpfs 00:05:29.309 ++ fss["$mount"]=tmpfs 00:05:29.309 ++ avails["$mount"]=5242880 00:05:29.309 ++ sizes["$mount"]=5242880 00:05:29.309 ++ uses["$mount"]=0 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=tmpfs 00:05:29.309 ++ fss["$mount"]=tmpfs 00:05:29.309 ++ avails["$mount"]=6272565248 00:05:29.309 ++ sizes["$mount"]=6272565248 00:05:29.309 ++ uses["$mount"]=0 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=/dev/loop0 00:05:29.309 ++ fss["$mount"]=squashfs 00:05:29.309 ++ avails["$mount"]=0 00:05:29.309 ++ sizes["$mount"]=67108864 00:05:29.309 ++ uses["$mount"]=67108864 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=/dev/loop1 00:05:29.309 ++ fss["$mount"]=squashfs 00:05:29.309 ++ avails["$mount"]=0 00:05:29.309 ++ sizes["$mount"]=41025536 00:05:29.309 ++ uses["$mount"]=41025536 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=/dev/vda15 00:05:29.309 ++ fss["$mount"]=vfat 00:05:29.309 ++ avails["$mount"]=103089152 00:05:29.309 ++ sizes["$mount"]=109422592 00:05:29.309 ++ uses["$mount"]=6334464 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=/dev/loop2 00:05:29.309 ++ 
fss["$mount"]=squashfs 00:05:29.309 ++ avails["$mount"]=0 00:05:29.309 ++ sizes["$mount"]=96337920 00:05:29.309 ++ uses["$mount"]=96337920 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=tmpfs 00:05:29.309 ++ fss["$mount"]=tmpfs 00:05:29.309 ++ avails["$mount"]=1254510592 00:05:29.309 ++ sizes["$mount"]=1254510592 00:05:29.309 ++ uses["$mount"]=0 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:05:29.309 ++ fss["$mount"]=fuse.sshfs 00:05:29.309 ++ avails["$mount"]=96567160832 00:05:29.309 ++ sizes["$mount"]=105088212992 00:05:29.309 ++ uses["$mount"]=3135619072 00:05:29.309 ++ read -r source fs size use avail _ mount 00:05:29.309 ++ printf '* Looking for test storage...\n' 00:05:29.309 * Looking for test storage... 00:05:29.309 ++ local target_space new_size 00:05:29.309 ++ for target_dir in "${storage_candidates[@]}" 00:05:29.309 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.309 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:29.309 ++ mount=/ 00:05:29.309 ++ target_space=10737549312 00:05:29.309 ++ (( target_space == 0 || target_space < requested_size )) 00:05:29.309 ++ (( target_space >= requested_size )) 00:05:29.309 ++ [[ ext4 == tmpfs ]] 00:05:29.309 ++ [[ ext4 == ramfs ]] 00:05:29.309 ++ [[ / == / ]] 00:05:29.309 ++ new_size=12077060096 00:05:29.309 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:29.309 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.309 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.309 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.309 ++ return 0 00:05:29.310 ++ set -o errtrace 00:05:29.310 ++ shopt -s extdebug 00:05:29.310 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:29.310 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:29.310 22:13:28 -- common/autotest_common.sh@1672 -- # true 00:05:29.310 22:13:28 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:29.310 22:13:28 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:29.310 22:13:28 -- common/autotest_common.sh@29 -- # exec 00:05:29.310 22:13:28 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:29.310 22:13:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:29.310 22:13:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:29.310 22:13:28 -- common/autotest_common.sh@18 -- # set -x 00:05:29.310 22:13:28 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:29.310 22:13:28 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:29.310 22:13:28 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:29.310 22:13:28 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:29.310 22:13:28 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:29.310 22:13:28 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:29.310 22:13:28 -- unit/unittest.sh@179 -- # hash lcov 00:05:29.310 22:13:28 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:29.310 22:13:28 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:29.310 22:13:28 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:29.310 22:13:28 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:29.310 22:13:28 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:29.310 22:13:28 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:29.310 22:13:28 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:29.310 22:13:28 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:29.310 --rc lcov_branch_coverage=1 00:05:29.310 --rc lcov_function_coverage=1 00:05:29.310 --rc genhtml_branch_coverage=1 00:05:29.310 --rc genhtml_function_coverage=1 00:05:29.310 --rc genhtml_legend=1 00:05:29.310 --rc geninfo_all_blocks=1 00:05:29.310 ' 00:05:29.310 22:13:28 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:29.310 --rc lcov_branch_coverage=1 00:05:29.310 --rc lcov_function_coverage=1 00:05:29.310 --rc genhtml_branch_coverage=1 00:05:29.310 --rc genhtml_function_coverage=1 00:05:29.310 --rc genhtml_legend=1 00:05:29.310 --rc geninfo_all_blocks=1 00:05:29.310 ' 00:05:29.310 22:13:28 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:29.310 --rc lcov_branch_coverage=1 00:05:29.310 --rc lcov_function_coverage=1 00:05:29.310 --rc genhtml_branch_coverage=1 00:05:29.310 --rc genhtml_function_coverage=1 00:05:29.310 --rc genhtml_legend=1 00:05:29.310 --rc geninfo_all_blocks=1 00:05:29.310 --no-external' 00:05:29.310 22:13:28 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:29.310 --rc lcov_branch_coverage=1 00:05:29.310 --rc lcov_function_coverage=1 00:05:29.310 --rc genhtml_branch_coverage=1 00:05:29.310 --rc genhtml_function_coverage=1 00:05:29.310 --rc genhtml_legend=1 00:05:29.310 --rc geninfo_all_blocks=1 00:05:29.310 --no-external' 00:05:29.310 22:13:28 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:31.223 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:31.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:31.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:31.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:31.224 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:31.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:31.483 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:31.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:31.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:31.484 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:31.484 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:31.484 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:31.484 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:31.484 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:18.173 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:18.173 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:18.173 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:18.173 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:18.173 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:18.173 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:18.173 22:14:16 -- unit/unittest.sh@206 -- # uname -m 00:06:18.173 22:14:16 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:18.173 22:14:16 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:18.173 22:14:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.173 22:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.173 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.173 ************************************ 00:06:18.173 START TEST unittest_pci_event 00:06:18.173 ************************************ 00:06:18.173 22:14:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:18.173 00:06:18.173 00:06:18.173 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.173 http://cunit.sourceforge.net/ 00:06:18.173 00:06:18.173 00:06:18.173 Suite: pci_event 00:06:18.173 Test: test_pci_parse_event ...[2024-07-14 22:14:16.898829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:18.173 [2024-07-14 22:14:16.899167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:18.173 passed 00:06:18.173 00:06:18.173 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.173 suites 1 1 n/a 0 0 00:06:18.173 tests 1 1 1 0 0 00:06:18.173 asserts 15 15 15 0 n/a 00:06:18.173 00:06:18.173 Elapsed time = 0.001 seconds 00:06:18.173 ************************************ 00:06:18.173 END TEST unittest_pci_event 00:06:18.173 ************************************ 00:06:18.173 00:06:18.173 real 0m0.032s 00:06:18.173 user 0m0.009s 00:06:18.173 sys 0m0.019s 00:06:18.173 22:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.173 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.173 22:14:16 -- 
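One note on the coverage plumbing threaded through this stretch: the lcov invocation before the test run captures an initial, zero-count baseline (-c -i), and the long run of geninfo 'no functions found' warnings is expected, since the test/cpp_headers objects only verify that each public SPDK header compiles on its own and so contain no executable functions. After the suites finish, the usual lcov flow captures real counts and merges them with the baseline; a sketch using the baseline path from the log, with illustrative names for the later files (only ut_cov_base.info appears above):

    cd /home/vagrant/spdk_repo/spdk
    UT_COVERAGE=../output/ut_coverage
    lcov $LCOV_OPTS -q -c -i -d . -t Baseline -o $UT_COVERAGE/ut_cov_base.info  # zeroed counts
    # ... *_ut binaries run here, dropping .gcda files next to the .gcno ones ...
    lcov $LCOV_OPTS -q -c -d . -t Tests -o $UT_COVERAGE/ut_cov_test.info        # real counts
    lcov $LCOV_OPTS -a $UT_COVERAGE/ut_cov_base.info \
         -a $UT_COVERAGE/ut_cov_test.info -o $UT_COVERAGE/ut_cov_total.info     # union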
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:18.173 22:14:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.173 22:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.173 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.173 ************************************ 00:06:18.173 START TEST unittest_include 00:06:18.173 ************************************ 00:06:18.173 22:14:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:18.173 00:06:18.173 00:06:18.173 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.173 http://cunit.sourceforge.net/ 00:06:18.173 00:06:18.173 00:06:18.173 Suite: histogram 00:06:18.173 Test: histogram_test ...passed 00:06:18.173 Test: histogram_merge ...passed 00:06:18.173 00:06:18.173 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.173 suites 1 1 n/a 0 0 00:06:18.173 tests 2 2 2 0 0 00:06:18.173 asserts 50 50 50 0 n/a 00:06:18.173 00:06:18.173 Elapsed time = 0.006 seconds 00:06:18.174 00:06:18.174 real 0m0.031s 00:06:18.174 user 0m0.019s 00:06:18.174 sys 0m0.012s 00:06:18.174 22:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.174 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.174 ************************************ 00:06:18.174 END TEST unittest_include 00:06:18.174 ************************************ 00:06:18.174 22:14:17 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:18.174 22:14:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.174 22:14:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.174 22:14:17 -- common/autotest_common.sh@10 -- # set +x 00:06:18.174 ************************************ 00:06:18.174 START TEST unittest_bdev 00:06:18.174 ************************************ 00:06:18.174 22:14:17 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:18.174 22:14:17 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:18.174 00:06:18.174 00:06:18.174 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.174 http://cunit.sourceforge.net/ 00:06:18.174 00:06:18.174 00:06:18.174 Suite: bdev 00:06:18.174 Test: bytes_to_blocks_test ...passed 00:06:18.174 Test: num_blocks_test ...passed 00:06:18.174 Test: io_valid_test ...passed 00:06:18.174 Test: open_write_test ...[2024-07-14 22:14:17.130259] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:18.174 [2024-07-14 22:14:17.131153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:18.174 [2024-07-14 22:14:17.131437] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:18.174 passed 00:06:18.174 Test: claim_test ...passed 00:06:18.174 Test: alias_add_del_test ...[2024-07-14 22:14:17.233165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:18.174 [2024-07-14 22:14:17.233556] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:18.174 [2024-07-14 22:14:17.233751] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:06:18.433 passed 00:06:18.433 Test: get_device_stat_test ...passed 00:06:18.433 Test: bdev_io_types_test ...passed 00:06:18.433 Test: bdev_io_wait_test ...passed 00:06:18.433 Test: bdev_io_spans_split_test ...passed 00:06:18.433 Test: bdev_io_boundary_split_test ...passed 00:06:18.433 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-14 22:14:17.414152] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:18.433 passed 00:06:18.433 Test: bdev_io_mix_split_test ...passed 00:06:18.691 Test: bdev_io_split_with_io_wait ...passed 00:06:18.691 Test: bdev_io_write_unit_split_test ...[2024-07-14 22:14:17.548216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:18.691 [2024-07-14 22:14:17.548573] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:18.691 [2024-07-14 22:14:17.548707] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:18.691 [2024-07-14 22:14:17.548860] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:18.691 passed 00:06:18.691 Test: bdev_io_alignment_with_boundary ...passed 00:06:18.691 Test: bdev_io_alignment ...passed 00:06:18.691 Test: bdev_histograms ...passed 00:06:18.691 Test: bdev_write_zeroes ...passed 00:06:18.950 Test: bdev_compare_and_write ...passed 00:06:18.950 Test: bdev_compare ...passed 00:06:18.950 Test: bdev_compare_emulated ...passed 00:06:19.209 Test: bdev_zcopy_write ...passed 00:06:19.209 Test: bdev_zcopy_read ...passed 00:06:19.209 Test: bdev_open_while_hotremove ...passed 00:06:19.209 Test: bdev_close_while_hotremove ...passed 00:06:19.209 Test: bdev_open_ext_test ...[2024-07-14 22:14:18.063605] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:19.209 passed 00:06:19.209 Test: bdev_open_ext_unregister ...passed 00:06:19.209 Test: bdev_set_io_timeout ...[2024-07-14 22:14:18.064135] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:19.209 passed 00:06:19.209 Test: bdev_set_qd_sampling ...passed 00:06:19.209 Test: lba_range_overlap ...passed 00:06:19.209 Test: lock_lba_range_check_ranges ...passed 00:06:19.209 Test: lock_lba_range_with_io_outstanding ...passed 00:06:19.469 Test: lock_lba_range_overlapped ...passed 00:06:19.469 Test: bdev_quiesce ...[2024-07-14 22:14:18.306868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:19.469 passed 00:06:19.469 Test: bdev_io_abort ...passed 00:06:19.469 Test: bdev_unmap ...passed 00:06:19.469 Test: bdev_write_zeroes_split_test ...passed 00:06:19.469 Test: bdev_set_options_test ...[2024-07-14 22:14:18.466326] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:19.469 passed 00:06:19.469 Test: bdev_get_memory_domains ...passed 00:06:19.469 Test: bdev_io_ext ...passed 00:06:19.727 Test: bdev_io_ext_no_opts ...passed 00:06:19.727 Test: bdev_io_ext_invalid_opts ...passed 00:06:19.727 Test: bdev_io_ext_split ...passed 00:06:19.727 Test: bdev_io_ext_bounce_buffer ...passed 00:06:19.727 Test: bdev_register_uuid_alias ...[2024-07-14 22:14:18.705453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name ba2b8753-ab2d-4455-aeee-8c94ec1df5ef already exists 00:06:19.727 [2024-07-14 22:14:18.705530] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:ba2b8753-ab2d-4455-aeee-8c94ec1df5ef alias for bdev bdev0 00:06:19.727 passed 00:06:19.727 Test: bdev_unregister_by_name ...passed 00:06:19.727 Test: for_each_bdev_test ...[2024-07-14 22:14:18.728561] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:19.727 [2024-07-14 22:14:18.728629] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:19.727 passed 00:06:19.727 Test: bdev_seek_test ...passed 00:06:19.986 Test: bdev_copy ...passed 00:06:19.986 Test: bdev_copy_split_test ...passed 00:06:19.986 Test: examine_locks ...passed 00:06:19.986 Test: claim_v2_rwo ...[2024-07-14 22:14:18.856030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856121] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856142] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856217] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.986 passed 00:06:19.986 Test: claim_v2_rom ...[2024-07-14 22:14:18.856268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:19.986 [2024-07-14 22:14:18.856440] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856493] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:19.986 [2024-07-14 22:14:18.856518] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856543] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856577] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:19.987 passed 00:06:19.987 Test: claim_v2_rwm ...[2024-07-14 22:14:18.856610] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:19.987 [2024-07-14 22:14:18.856733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:19.987 [2024-07-14 22:14:18.856785] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856817] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856842] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856860] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856885] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.856918] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:19.987 passed 00:06:19.987 Test: claim_v2_existing_writer ...[2024-07-14 22:14:18.857058] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:19.987 [2024-07-14 22:14:18.857086] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:19.987 passed 00:06:19.987 Test: claim_v2_existing_v1 ...[2024-07-14 22:14:18.857195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.857225] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:19.987 passed 00:06:19.987 Test: claim_v1_existing_v2 ...[2024-07-14 22:14:18.857244] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.857377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:19.987 passed 00:06:19.987 Test: examine_claimed ...[2024-07-14 22:14:18.857428] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.857462] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:19.987 [2024-07-14 22:14:18.857754] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:19.987 passed 00:06:19.987 00:06:19.987 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.987 suites 1 1 n/a 0 0 00:06:19.987 tests 59 59 59 0 0 00:06:19.987 asserts 4599 4599 4599 0 n/a 00:06:19.987 00:06:19.987 Elapsed time = 1.799 seconds 00:06:19.987 22:14:18 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:19.987 00:06:19.987 00:06:19.987 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.987 http://cunit.sourceforge.net/ 00:06:19.987 00:06:19.987 00:06:19.987 Suite: nvme 00:06:19.987 Test: test_create_ctrlr ...passed 00:06:19.987 Test: test_reset_ctrlr ...[2024-07-14 22:14:18.903260] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:19.987 Test: test_failover_ctrlr ...passed 00:06:19.987 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-14 22:14:18.905942] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.906163] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.906366] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_pending_reset ...[2024-07-14 22:14:18.907857] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.908183] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_attach_ctrlr ...[2024-07-14 22:14:18.909460] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:19.987 passed 00:06:19.987 Test: test_aer_cb ...passed 00:06:19.987 Test: test_submit_nvme_cmd ...passed 00:06:19.987 Test: test_add_remove_trid ...passed 00:06:19.987 Test: test_abort ...[2024-07-14 22:14:18.912992] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:19.987 passed 00:06:19.987 Test: test_get_io_qpair ...passed 00:06:19.987 Test: test_bdev_unregister ...passed 00:06:19.987 Test: test_compare_ns ...passed 00:06:19.987 Test: test_init_ana_log_page ...passed 00:06:19.987 Test: test_get_memory_domains ...passed 00:06:19.987 Test: test_reconnect_qpair ...[2024-07-14 22:14:18.915799] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
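The claim_v2 cases above probe the option rules that the errors spell out: read-write-once and read-only-many claims reject a key, a read-only-many claim cannot be obtained through a writable descriptor, and read-write-many claims require a shared_claim_key. A table-driven restatement of those rules follows; the enum and parameter names are hypothetical, not SPDK's:

/* Sketch of the v2 claim compatibility rules exercised in the log.
 * Names are illustrative. */
#include <stdbool.h>

enum claim_type {
    CLAIM_READ_MANY_WRITE_ONE,    /* rwo */
    CLAIM_READ_MANY_WRITE_NONE,   /* rom */
    CLAIM_READ_MANY_WRITE_MANY,   /* rwm */
};

static bool
claim_opts_ok(enum claim_type type, bool desc_writable, bool has_key)
{
    switch (type) {
    case CLAIM_READ_MANY_WRITE_ONE:
        return !has_key;                    /* "key option not supported" */
    case CLAIM_READ_MANY_WRITE_NONE:
        return !has_key && !desc_writable;  /* no rom claim via writable desc */
    case CLAIM_READ_MANY_WRITE_MANY:
        return has_key;                     /* "shared_claim_key option required" */
    }
    return false;
}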
00:06:19.987 passed 00:06:19.987 Test: test_create_bdev_ctrlr ...[2024-07-14 22:14:18.916400] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:19.987 passed 00:06:19.987 Test: test_add_multi_ns_to_bdev ...[2024-07-14 22:14:18.917676] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:19.987 passed 00:06:19.987 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:19.987 Test: test_admin_path ...passed 00:06:19.987 Test: test_reset_bdev_ctrlr ...passed 00:06:19.987 Test: test_find_io_path ...passed 00:06:19.987 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:19.987 Test: test_retry_io_for_io_path_error ...passed 00:06:19.987 Test: test_retry_io_count ...passed 00:06:19.987 Test: test_concurrent_read_ana_log_page ...passed 00:06:19.987 Test: test_retry_io_for_ana_error ...passed 00:06:19.987 Test: test_check_io_error_resiliency_params ...[2024-07-14 22:14:18.924890] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:19.987 [2024-07-14 22:14:18.924973] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:19.987 [2024-07-14 22:14:18.924999] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:19.987 [2024-07-14 22:14:18.925027] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:19.987 [2024-07-14 22:14:18.925054] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:19.987 [2024-07-14 22:14:18.925083] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:19.987 passed 00:06:19.987 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-14 22:14:18.925103] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:19.987 [2024-07-14 22:14:18.925164] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:19.987 [2024-07-14 22:14:18.925199] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:19.987 passed 00:06:19.987 Test: test_reconnect_ctrlr ...[2024-07-14 22:14:18.926044] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.926196] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
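test_check_io_error_resiliency_params above walks every constraint that the error messages spell out. Restated as a standalone validator; the struct, field, and function names are assumptions for illustration, not SPDK's actual internals:

/* Hypothetical validator restating the constraints printed by
 * bdev_nvme_check_io_error_resiliency_params in the log above. */
#include <stdbool.h>
#include <stdint.h>

struct io_error_resiliency_params {
    int32_t  ctrlr_loss_timeout_sec;   /* -1 means retry forever */
    uint32_t reconnect_delay_sec;
    uint32_t fast_io_fail_timeout_sec;
};

static bool
params_are_valid(const struct io_error_resiliency_params *p)
{
    if (p->ctrlr_loss_timeout_sec < -1) {
        return false;                       /* can't be less than -1 */
    }
    if (p->ctrlr_loss_timeout_sec == 0) {
        /* both delays must be 0 when the timeout is 0 */
        return p->reconnect_delay_sec == 0 && p->fast_io_fail_timeout_sec == 0;
    }
    if (p->reconnect_delay_sec == 0) {
        return false;                       /* delay can't be 0 otherwise */
    }
    if (p->ctrlr_loss_timeout_sec > 0 &&
        p->reconnect_delay_sec > (uint32_t)p->ctrlr_loss_timeout_sec) {
        return false;                       /* delay can't exceed the loss timeout */
    }
    if (p->fast_io_fail_timeout_sec != 0) {
        if (p->ctrlr_loss_timeout_sec > 0 &&
            p->fast_io_fail_timeout_sec > (uint32_t)p->ctrlr_loss_timeout_sec) {
            return false;                   /* fast-fail can't exceed the loss timeout */
        }
        if (p->reconnect_delay_sec > p->fast_io_fail_timeout_sec) {
            return false;                   /* delay can't exceed fast-fail */
        }
    }
    return true;
}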
00:06:19.987 [2024-07-14 22:14:18.926463] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.926588] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.926746] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_retry_failover_ctrlr ...[2024-07-14 22:14:18.927152] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_fail_path ...[2024-07-14 22:14:18.927715] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.927914] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.928028] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.928176] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 [2024-07-14 22:14:18.928320] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_nvme_ns_cmp ...passed 00:06:19.987 Test: test_ana_transition ...passed 00:06:19.987 Test: test_set_preferred_path ...passed 00:06:19.987 Test: test_find_next_io_path ...passed 00:06:19.987 Test: test_find_io_path_min_qd ...passed 00:06:19.987 Test: test_disable_auto_failback ...[2024-07-14 22:14:18.930131] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.987 passed 00:06:19.987 Test: test_set_multipath_policy ...passed 00:06:19.987 Test: test_uuid_generation ...passed 00:06:19.987 Test: test_retry_io_to_same_path ...passed 00:06:19.988 Test: test_race_between_reset_and_disconnected ...passed 00:06:19.988 Test: test_ctrlr_op_rpc ...passed 00:06:19.988 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:19.988 Test: test_disable_enable_ctrlr ...[2024-07-14 22:14:18.933989] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:19.988 [2024-07-14 22:14:18.934173] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:19.988 passed 00:06:19.988 Test: test_delete_ctrlr_done ...passed 00:06:19.988 Test: test_ns_remove_during_reset ...passed 00:06:19.988 00:06:19.988 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.988 suites 1 1 n/a 0 0 00:06:19.988 tests 48 48 48 0 0 00:06:19.988 asserts 3553 3553 3553 0 n/a 00:06:19.988 00:06:19.988 Elapsed time = 0.033 seconds 00:06:19.988 22:14:18 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:19.988 Test Options 00:06:19.988 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:19.988 00:06:19.988 00:06:19.988 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.988 http://cunit.sourceforge.net/ 00:06:19.988 00:06:19.988 00:06:19.988 Suite: raid 00:06:19.988 Test: test_create_raid ...passed 00:06:19.988 Test: test_create_raid_superblock ...passed 00:06:19.988 Test: test_delete_raid ...passed 00:06:19.988 Test: test_create_raid_invalid_args ...[2024-07-14 22:14:18.974637] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:19.988 [2024-07-14 22:14:18.975007] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:19.988 [2024-07-14 22:14:18.975384] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:19.988 [2024-07-14 22:14:18.975589] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:19.988 [2024-07-14 22:14:18.976298] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:19.988 passed 00:06:19.988 Test: test_delete_raid_invalid_args ...passed 00:06:19.988 Test: test_io_channel ...passed 00:06:19.988 Test: test_reset_io ...passed 00:06:19.988 Test: test_write_io ...passed 00:06:19.988 Test: test_read_io ...passed 00:06:20.923 Test: test_unmap_io ...passed 00:06:20.923 Test: test_io_failure ...[2024-07-14 22:14:19.929644] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:20.923 passed 00:06:20.923 Test: test_multi_raid_no_io ...passed 00:06:20.923 Test: test_multi_raid_with_io ...passed 00:06:20.923 Test: test_io_type_supported ...passed 00:06:20.923 Test: test_raid_json_dump_info ...passed 00:06:20.923 Test: test_context_size ...passed 00:06:20.923 Test: test_raid_level_conversions ...passed 00:06:20.923 Test: test_raid_process ...passed 00:06:20.923 Test: test_raid_io_split ...passed 00:06:20.923 00:06:20.923 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.923 suites 1 1 n/a 0 0 00:06:20.923 tests 19 19 19 0 0 00:06:20.923 asserts 177879 177879 177879 0 n/a 00:06:20.923 00:06:20.923 Elapsed time = 0.968 seconds 00:06:20.923 22:14:19 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:20.923 00:06:20.923 00:06:20.923 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.923 http://cunit.sourceforge.net/ 00:06:20.923 00:06:20.923 00:06:20.923 Suite: raid_sb 00:06:20.923 Test: test_raid_bdev_write_superblock ...passed 00:06:20.923 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:20.923 Test: 
test_raid_bdev_parse_superblock ...passed 00:06:20.923 00:06:20.923 [2024-07-14 22:14:19.983085] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:20.923 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.923 suites 1 1 n/a 0 0 00:06:20.923 tests 3 3 3 0 0 00:06:20.923 asserts 32 32 32 0 n/a 00:06:20.923 00:06:20.923 Elapsed time = 0.001 seconds 00:06:21.182 22:14:19 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:21.182 00:06:21.182 00:06:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.182 http://cunit.sourceforge.net/ 00:06:21.182 00:06:21.182 00:06:21.182 Suite: concat 00:06:21.182 Test: test_concat_start ...passed 00:06:21.182 Test: test_concat_rw ...passed 00:06:21.182 Test: test_concat_null_payload ...passed 00:06:21.182 00:06:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.182 suites 1 1 n/a 0 0 00:06:21.182 tests 3 3 3 0 0 00:06:21.182 asserts 8097 8097 8097 0 n/a 00:06:21.182 00:06:21.182 Elapsed time = 0.007 seconds 00:06:21.182 22:14:20 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:21.182 00:06:21.182 00:06:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.182 http://cunit.sourceforge.net/ 00:06:21.182 00:06:21.182 00:06:21.182 Suite: raid1 00:06:21.182 Test: test_raid1_start ...passed 00:06:21.182 Test: test_raid1_read_balancing ...passed 00:06:21.182 00:06:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.182 suites 1 1 n/a 0 0 00:06:21.182 tests 2 2 2 0 0 00:06:21.182 asserts 2856 2856 2856 0 n/a 00:06:21.182 00:06:21.182 Elapsed time = 0.004 seconds 00:06:21.182 22:14:20 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:21.182 00:06:21.182 00:06:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.182 http://cunit.sourceforge.net/ 00:06:21.182 00:06:21.182 00:06:21.182 Suite: zone 00:06:21.182 Test: test_zone_get_operation ...passed 00:06:21.182 Test: test_bdev_zone_get_info ...passed 00:06:21.182 Test: test_bdev_zone_management ...passed 00:06:21.182 Test: test_bdev_zone_append ...passed 00:06:21.182 Test: test_bdev_zone_append_with_md ...passed 00:06:21.182 Test: test_bdev_zone_appendv ...passed 00:06:21.182 Test: test_bdev_zone_appendv_with_md ...passed 00:06:21.182 Test: test_bdev_io_get_append_location ...passed 00:06:21.182 00:06:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.182 suites 1 1 n/a 0 0 00:06:21.182 tests 8 8 8 0 0 00:06:21.182 asserts 94 94 94 0 n/a 00:06:21.182 00:06:21.182 Elapsed time = 0.000 seconds 00:06:21.182 22:14:20 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:21.182 00:06:21.182 00:06:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.182 http://cunit.sourceforge.net/ 00:06:21.182 00:06:21.182 00:06:21.182 Suite: gpt_parse 00:06:21.182 Test: test_parse_mbr_and_primary ...[2024-07-14 22:14:20.138070] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:21.182 [2024-07-14 22:14:20.138385] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:21.182 [2024-07-14 22:14:20.138443] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:21.182 [2024-07-14 22:14:20.138529] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:21.182 [2024-07-14 22:14:20.138575] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:21.182 [2024-07-14 22:14:20.138654] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:21.182 passed 00:06:21.182 Test: test_parse_secondary ...[2024-07-14 22:14:20.139421] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:21.182 [2024-07-14 22:14:20.139466] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:21.182 [2024-07-14 22:14:20.139495] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:21.182 [2024-07-14 22:14:20.139521] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:21.182 passed 00:06:21.182 Test: test_check_mbr ...passed 00:06:21.182 Test: test_read_header ...[2024-07-14 22:14:20.140302] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:21.182 [2024-07-14 22:14:20.140356] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:21.182 [2024-07-14 22:14:20.140424] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:21.182 passed 00:06:21.182 Test: test_read_partitions ...[2024-07-14 22:14:20.140525] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:21.182 [2024-07-14 22:14:20.140595] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:21.182 [2024-07-14 22:14:20.140642] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:21.182 [2024-07-14 22:14:20.140683] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:21.182 [2024-07-14 22:14:20.140709] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:21.182 [2024-07-14 22:14:20.140762] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:21.182 [2024-07-14 22:14:20.140807] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:21.182 [2024-07-14 22:14:20.140836] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:21.182 [2024-07-14 22:14:20.140856] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:21.182 [2024-07-14 22:14:20.141236] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:21.182 passed 00:06:21.182 00:06:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.182 suites 1 1 n/a 0 0 00:06:21.182 tests 5 5 5 0 0 00:06:21.182 asserts 33 33 33 0 n/a 00:06:21.182 00:06:21.182 Elapsed time = 0.004 seconds 00:06:21.182 22:14:20 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:21.182 00:06:21.182 00:06:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.182 http://cunit.sourceforge.net/ 00:06:21.182 00:06:21.182 00:06:21.182 Suite: bdev_part 00:06:21.182 Test: part_test ...[2024-07-14 22:14:20.179509] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:21.182 passed 00:06:21.182 Test: part_free_test ...passed 00:06:21.182 Test: part_get_io_channel_test ...passed 00:06:21.182 Test: part_construct_ext ...passed 00:06:21.182 00:06:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.182 suites 1 1 n/a 0 0 00:06:21.182 tests 4 4 4 0 0 00:06:21.182 asserts 48 48 48 0 n/a 00:06:21.182 00:06:21.182 Elapsed time = 0.053 seconds 00:06:21.523 22:14:20 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:21.523 00:06:21.523 00:06:21.523 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.523 http://cunit.sourceforge.net/ 00:06:21.523 00:06:21.523 00:06:21.523 Suite: scsi_nvme_suite 00:06:21.523 Test: scsi_nvme_translate_test ...passed 00:06:21.523 00:06:21.523 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.523 suites 1 1 n/a 0 0 00:06:21.523 tests 1 1 1 0 0 00:06:21.523 asserts 104 104 104 0 n/a 00:06:21.523 00:06:21.523 Elapsed time = 0.000 seconds 00:06:21.523 22:14:20 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:21.523 00:06:21.523 00:06:21.523 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.523 http://cunit.sourceforge.net/ 00:06:21.523 00:06:21.523 00:06:21.523 Suite: lvol 00:06:21.523 Test: ut_lvs_init ...[2024-07-14 22:14:20.302094] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:21.523 passed 00:06:21.523 Test: ut_lvol_init ...[2024-07-14 22:14:20.302621] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:21.523 passed 00:06:21.523 Test: ut_lvol_snapshot ...passed 00:06:21.523 Test: ut_lvol_clone ...passed 00:06:21.523 Test: ut_lvs_destroy ...passed 00:06:21.523 Test: ut_lvs_unload ...passed 00:06:21.523 Test: ut_lvol_resize ...passed 00:06:21.523 Test: ut_lvol_set_read_only ...[2024-07-14 22:14:20.304416] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:21.523 passed 00:06:21.523 Test: ut_lvol_hotremove ...passed 00:06:21.523 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:21.523 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:21.523 Test: ut_lvol_read_write ...passed 00:06:21.523 Test: ut_vbdev_lvol_submit_request ...passed 00:06:21.523 Test: ut_lvol_examine_config ...passed 00:06:21.523 Test: ut_lvol_examine_disk ...[2024-07-14 22:14:20.305046] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:21.523 passed 00:06:21.523 Test: ut_lvol_rename ...passed 00:06:21.523 Test: ut_bdev_finish 
...[2024-07-14 22:14:20.305897] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:21.523 [2024-07-14 22:14:20.305981] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:21.523 passed 00:06:21.523 Test: ut_lvs_rename ...passed 00:06:21.523 Test: ut_lvol_seek ...passed 00:06:21.523 Test: ut_esnap_dev_create ...[2024-07-14 22:14:20.306590] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:21.523 [2024-07-14 22:14:20.306653] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:21.523 [2024-07-14 22:14:20.306676] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:21.523 passed 00:06:21.523 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-14 22:14:20.306717] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:21.523 [2024-07-14 22:14:20.306840] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:21.523 [2024-07-14 22:14:20.306875] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:21.523 passed 00:06:21.523 00:06:21.523 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.523 suites 1 1 n/a 0 0 00:06:21.523 tests 21 21 21 0 0 00:06:21.523 asserts 712 712 712 0 n/a 00:06:21.523 00:06:21.523 Elapsed time = 0.005 seconds 00:06:21.523 22:14:20 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:21.523 00:06:21.523 00:06:21.523 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.523 http://cunit.sourceforge.net/ 00:06:21.523 00:06:21.523 00:06:21.523 Suite: zone_block 00:06:21.523 Test: test_zone_block_create ...passed 00:06:21.523 Test: test_zone_block_create_invalid ...[2024-07-14 22:14:20.368194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:21.523 [2024-07-14 22:14:20.368507] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-14 22:14:20.368680] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:21.523 [2024-07-14 22:14:20.368735] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-14 22:14:20.368888] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:21.523 [2024-07-14 22:14:20.368920] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-14 22:14:20.369022] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:21.523 [2024-07-14 22:14:20.369067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:21.523 Test: test_get_zone_info ...[2024-07-14 22:14:20.369599] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 [2024-07-14 22:14:20.369658] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 [2024-07-14 22:14:20.369711] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 passed 00:06:21.523 Test: test_supported_io_types ...passed 00:06:21.523 Test: test_reset_zone ...[2024-07-14 22:14:20.370560] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 [2024-07-14 22:14:20.370625] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 passed 00:06:21.523 Test: test_open_zone ...[2024-07-14 22:14:20.371060] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 [2024-07-14 22:14:20.371792] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.523 passed 00:06:21.524 Test: test_zone_write ...[2024-07-14 22:14:20.371859] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.372352] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:21.524 [2024-07-14 22:14:20.372405] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.372466] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:21.524 [2024-07-14 22:14:20.372508] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.378146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:21.524 [2024-07-14 22:14:20.378201] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:21.524 [2024-07-14 22:14:20.378282] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:21.524 [2024-07-14 22:14:20.378306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.384001] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:21.524 [2024-07-14 22:14:20.384080] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 passed 00:06:21.524 Test: test_zone_read ...[2024-07-14 22:14:20.384554] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:21.524 [2024-07-14 22:14:20.384596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.384663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:21.524 [2024-07-14 22:14:20.384691] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.385169] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:21.524 [2024-07-14 22:14:20.385210] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 passed 00:06:21.524 Test: test_close_zone ...[2024-07-14 22:14:20.385573] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.385650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.385894] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.385942] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 passed 00:06:21.524 Test: test_finish_zone ...[2024-07-14 22:14:20.386595] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.386656] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
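The zone_block write errors above encode two checks: a write must land exactly on the zone's current write pointer, and it must not run past the zone capacity. A minimal sketch under those assumptions, with types and names that are illustrative rather than SPDK's:

/* Sketch of the two zone-write checks exercised in the log. */
#include <stdbool.h>
#include <stdint.h>

struct zone {
    uint64_t start_lba;   /* first lba of the zone */
    uint64_t capacity;    /* writable blocks in the zone */
    uint64_t write_ptr;   /* next lba that may be written */
};

static bool
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
{
    if (lba != z->write_ptr) {
        return false;   /* cf. "invalid address (lba 0x407, wp 0x405)" */
    }
    if (lba + len > z->start_lba + z->capacity) {
        return false;   /* cf. "Write exceeds zone capacity" */
    }
    return true;
}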
00:06:21.524 passed 00:06:21.524 Test: test_append_zone ...[2024-07-14 22:14:20.387026] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:21.524 [2024-07-14 22:14:20.387074] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.387126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:21.524 [2024-07-14 22:14:20.387150] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 [2024-07-14 22:14:20.398194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:21.524 [2024-07-14 22:14:20.398273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:21.524 passed 00:06:21.524 00:06:21.524 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.524 suites 1 1 n/a 0 0 00:06:21.524 tests 11 11 11 0 0 00:06:21.524 asserts 3437 3437 3437 0 n/a 00:06:21.524 00:06:21.524 Elapsed time = 0.031 seconds 00:06:21.524 22:14:20 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:21.524 00:06:21.524 00:06:21.524 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.524 http://cunit.sourceforge.net/ 00:06:21.524 00:06:21.524 00:06:21.524 Suite: bdev 00:06:21.524 Test: basic ...[2024-07-14 22:14:20.500671] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x557dd6d5b401): Operation not permitted (rc=-1) 00:06:21.524 [2024-07-14 22:14:20.501060] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x557dd6d5b3c0): Operation not permitted (rc=-1) 00:06:21.524 [2024-07-14 22:14:20.501103] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x557dd6d5b401): Operation not permitted (rc=-1) 00:06:21.524 passed 00:06:21.806 Test: unregister_and_close ...passed 00:06:21.806 Test: unregister_and_close_different_threads ...passed 00:06:21.806 Test: basic_qos ...passed 00:06:21.806 Test: put_channel_during_reset ...passed 00:06:21.806 Test: aborted_reset ...passed 00:06:21.806 Test: aborted_reset_no_outstanding_io ...passed 00:06:22.064 Test: io_during_reset ...passed 00:06:22.064 Test: reset_completions ...passed 00:06:22.064 Test: io_during_qos_queue ...passed 00:06:22.064 Test: io_during_qos_reset ...passed 00:06:22.064 Test: enomem ...passed 00:06:22.323 Test: enomem_multi_bdev ...passed 00:06:22.323 Test: enomem_multi_bdev_unregister ...passed 00:06:22.323 Test: enomem_multi_io_target ...passed 00:06:22.323 Test: qos_dynamic_enable ...passed 00:06:22.323 Test: bdev_histograms_mt ...passed 00:06:22.582 Test: bdev_set_io_timeout_mt ...[2024-07-14 22:14:21.395110] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:22.582 passed 00:06:22.582 Test: lock_lba_range_then_submit_io ...[2024-07-14 22:14:21.416778] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x557dd6d5b380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:22.582 
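Each *_ut binary in this run is a CUnit 2.1-3 harness, as the banners and the "Run Summary: Type Total Ran Passed Failed Inactive" tables show. A minimal harness of the same general shape, with a stub test body; this is a sketch, not SPDK's actual registration code:

/* Minimal CUnit harness; CU_basic_run_tests prints a Run Summary
 * like the ones in this log. */
#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    unsigned int failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures ? 1 : 0;
}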
passed 00:06:22.582 Test: unregister_during_reset ...passed 00:06:22.582 Test: event_notify_and_close ...passed 00:06:22.582 Test: unregister_and_qos_poller ...passed 00:06:22.582 Suite: bdev_wrong_thread 00:06:22.582 Test: spdk_bdev_register_wt ...[2024-07-14 22:14:21.585810] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:22.582 passed 00:06:22.582 Test: spdk_bdev_examine_wt ...[2024-07-14 22:14:21.586180] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:22.582 passed 00:06:22.582 00:06:22.582 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.582 suites 2 2 n/a 0 0 00:06:22.582 tests 24 24 24 0 0 00:06:22.582 asserts 621 621 621 0 n/a 00:06:22.582 00:06:22.582 Elapsed time = 1.117 seconds 00:06:22.582 00:06:22.582 real 0m4.584s 00:06:22.582 user 0m2.061s 00:06:22.582 sys 0m2.522s 00:06:22.582 22:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.582 ************************************ 00:06:22.582 END TEST unittest_bdev 00:06:22.582 ************************************ 00:06:22.582 22:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 22:14:21 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:22.841 22:14:21 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:22.841 22:14:21 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:22.841 22:14:21 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:22.841 22:14:21 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:22.841 22:14:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.841 22:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.841 22:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.841 ************************************ 00:06:22.841 START TEST unittest_bdev_raid5f 00:06:22.841 ************************************ 00:06:22.841 22:14:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:22.841 00:06:22.841 00:06:22.841 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.841 http://cunit.sourceforge.net/ 00:06:22.841 00:06:22.841 00:06:22.841 Suite: raid5f 00:06:22.841 Test: test_raid5f_start ...passed 00:06:23.409 Test: test_raid5f_submit_read_request ...passed 00:06:23.409 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:27.609 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:42.491 Test: test_raid5f_chunk_write_error ...passed 00:06:50.606 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:53.137 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:19.682 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:19.682 00:07:19.682 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.682 suites 1 1 n/a 0 0 00:07:19.682 tests 8 8 8 0 0 00:07:19.682 asserts 351864 351864 351864 0 n/a 00:07:19.682 00:07:19.682 Elapsed time = 55.805 seconds 00:07:19.682 00:07:19.682 real 0m55.898s 00:07:19.682 user 
0m53.268s 00:07:19.682 sys 0m2.603s 00:07:19.682 22:15:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.682 22:15:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.682 ************************************ 00:07:19.682 END TEST unittest_bdev_raid5f 00:07:19.682 ************************************ 00:07:19.682 22:15:17 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:19.682 22:15:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.682 22:15:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.682 22:15:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.682 ************************************ 00:07:19.682 START TEST unittest_blob_blobfs 00:07:19.682 ************************************ 00:07:19.682 22:15:17 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:19.682 22:15:17 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:19.682 22:15:17 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:19.682 00:07:19.682 00:07:19.682 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.682 http://cunit.sourceforge.net/ 00:07:19.682 00:07:19.682 00:07:19.682 Suite: blob_nocopy_noextent 00:07:19.682 Test: blob_init ...[2024-07-14 22:15:17.655961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:19.682 passed 00:07:19.682 Test: blob_thin_provision ...passed 00:07:19.682 Test: blob_read_only ...passed 00:07:19.682 Test: bs_load ...[2024-07-14 22:15:17.751550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:19.682 passed 00:07:19.682 Test: bs_load_custom_cluster_size ...passed 00:07:19.682 Test: bs_load_after_failed_grow ...passed 00:07:19.682 Test: bs_cluster_sz ...[2024-07-14 22:15:17.782754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:19.682 [2024-07-14 22:15:17.783439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:19.682 [2024-07-14 22:15:17.783746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:19.682 passed 00:07:19.682 Test: bs_resize_md ...passed 00:07:19.682 Test: bs_destroy ...passed 00:07:19.682 Test: bs_type ...passed 00:07:19.682 Test: bs_super_block ...passed 00:07:19.682 Test: bs_test_recover_cluster_count ...passed 00:07:19.682 Test: bs_grow_live ...passed 00:07:19.682 Test: bs_grow_live_no_space ...passed 00:07:19.682 Test: bs_test_grow ...passed 00:07:19.682 Test: blob_serialize_test ...passed 00:07:19.682 Test: super_block_crc ...passed 00:07:19.682 Test: blob_thin_prov_write_count_io ...passed 00:07:19.682 Test: bs_load_iter_test ...passed 00:07:19.682 Test: blob_relations ...[2024-07-14 22:15:17.948297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.948658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:17.949763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.949951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 passed 00:07:19.682 Test: blob_relations2 ...[2024-07-14 22:15:17.965354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.965603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:17.965759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.965879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:17.967497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.967685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:17.968322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.682 [2024-07-14 22:15:17.968461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 passed 00:07:19.682 Test: blob_relations3 ...passed 00:07:19.682 Test: blobstore_clean_power_failure ...passed 00:07:19.682 Test: blob_delete_snapshot_power_failure ...[2024-07-14 22:15:18.135678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:19.682 [2024-07-14 22:15:18.148758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:19.682 [2024-07-14 22:15:18.149108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:19.682 [2024-07-14 22:15:18.149316] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:18.161890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:19.682 [2024-07-14 22:15:18.162108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:19.682 [2024-07-14 22:15:18.162278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:19.682 [2024-07-14 22:15:18.162413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:18.175148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:19.682 [2024-07-14 22:15:18.175433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.682 [2024-07-14 22:15:18.188965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:19.682 [2024-07-14 22:15:18.189258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.683 [2024-07-14 22:15:18.202089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:19.683 [2024-07-14 22:15:18.202372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.683 passed 00:07:19.683 Test: blob_create_snapshot_power_failure ...[2024-07-14 22:15:18.239986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:19.683 [2024-07-14 22:15:18.265452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:19.683 [2024-07-14 22:15:18.278309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:19.683 passed 00:07:19.683 Test: blob_io_unit ...passed 00:07:19.683 Test: blob_io_unit_compatibility ...passed 00:07:19.683 Test: blob_ext_md_pages ...passed 00:07:19.683 Test: blob_esnap_io_4096_4096 ...passed 00:07:19.683 Test: blob_esnap_io_512_512 ...passed 00:07:19.683 Test: blob_esnap_io_4096_512 ...passed 00:07:19.683 Test: blob_esnap_io_512_4096 ...passed 00:07:19.683 Suite: blob_bs_nocopy_noextent 00:07:19.683 Test: blob_open ...passed 00:07:19.683 Test: blob_create ...[2024-07-14 22:15:18.533817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:19.683 passed 00:07:19.683 Test: blob_create_loop ...passed 00:07:19.683 Test: blob_create_fail ...[2024-07-14 22:15:18.638395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:19.683 passed 00:07:19.683 Test: blob_create_internal ...passed 00:07:19.683 Test: blob_create_zero_extent ...passed 00:07:19.942 Test: blob_snapshot ...passed 00:07:19.942 Test: blob_clone ...passed 00:07:19.942 Test: blob_inflate ...[2024-07-14 22:15:18.842207] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:19.942 passed 00:07:19.942 Test: blob_delete ...passed 00:07:19.942 Test: blob_resize_test ...[2024-07-14 22:15:18.917090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:19.942 passed 00:07:19.942 Test: channel_ops ...passed 00:07:20.200 Test: blob_super ...passed 00:07:20.200 Test: blob_rw_verify_iov ...passed 00:07:20.200 Test: blob_unmap ...passed 00:07:20.200 Test: blob_iter ...passed 00:07:20.201 Test: blob_parse_md ...passed 00:07:20.201 Test: bs_load_pending_removal ...passed 00:07:20.201 Test: bs_unload ...[2024-07-14 22:15:19.212655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:20.201 passed 00:07:20.459 Test: bs_usable_clusters ...passed 00:07:20.459 Test: blob_crc ...[2024-07-14 22:15:19.285246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:20.459 [2024-07-14 22:15:19.285657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:20.459 passed 00:07:20.459 Test: blob_flags ...passed 00:07:20.459 Test: bs_version ...passed 00:07:20.459 Test: blob_set_xattrs_test ...[2024-07-14 22:15:19.397042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:20.459 [2024-07-14 22:15:19.397459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:20.459 passed 00:07:20.728 Test: blob_thin_prov_alloc ...passed 00:07:20.728 Test: blob_insert_cluster_msg_test ...passed 00:07:20.728 Test: blob_thin_prov_rw ...passed 00:07:20.728 Test: blob_thin_prov_rle ...passed 00:07:20.728 Test: blob_thin_prov_rw_iov ...passed 00:07:20.728 Test: blob_snapshot_rw ...passed 00:07:20.728 Test: blob_snapshot_rw_iov ...passed 00:07:20.990 Test: blob_inflate_rw ...passed 00:07:21.248 Test: blob_snapshot_freeze_io ...passed 00:07:21.248 Test: blob_operation_split_rw ...passed 00:07:21.507 Test: blob_operation_split_rw_iov ...passed 00:07:21.507 Test: blob_simultaneous_operations ...[2024-07-14 22:15:20.441360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:21.507 [2024-07-14 22:15:20.441536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:21.507 [2024-07-14 22:15:20.442678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:21.507 [2024-07-14 22:15:20.442773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:21.507 [2024-07-14 22:15:20.454282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:21.507 [2024-07-14 22:15:20.454354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:21.507 [2024-07-14 22:15:20.454498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:21.507 [2024-07-14 22:15:20.454555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:21.507 passed 00:07:21.507 Test: blob_persist_test ...passed 00:07:21.766 Test: blob_decouple_snapshot ...passed 00:07:21.766 Test: blob_seek_io_unit ...passed 00:07:21.766 Test: blob_nested_freezes ...passed 00:07:21.766 Suite: blob_blob_nocopy_noextent 00:07:21.766 Test: blob_write ...passed 00:07:21.766 Test: blob_read ...passed 00:07:21.766 Test: blob_rw_verify ...passed 00:07:21.766 Test: blob_rw_verify_iov_nomem ...passed 00:07:22.051 Test: blob_rw_iov_read_only ...passed 00:07:22.051 Test: blob_xattr ...passed 00:07:22.051 Test: blob_dirty_shutdown ...passed 00:07:22.051 Test: blob_is_degraded ...passed 00:07:22.051 Suite: blob_esnap_bs_nocopy_noextent 00:07:22.051 Test: blob_esnap_create ...passed 00:07:22.051 Test: blob_esnap_thread_add_remove ...passed 00:07:22.051 Test: blob_esnap_clone_snapshot ...passed 00:07:22.317 Test: blob_esnap_clone_inflate ...passed 00:07:22.317 Test: blob_esnap_clone_decouple ...passed 00:07:22.317 Test: blob_esnap_clone_reload ...passed 00:07:22.317 Test: blob_esnap_hotplug ...passed 00:07:22.317 Suite: blob_nocopy_extent 00:07:22.317 Test: blob_init ...[2024-07-14 22:15:21.226931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:22.317 passed 00:07:22.317 Test: blob_thin_provision ...passed 00:07:22.317 Test: blob_read_only ...passed 00:07:22.317 Test: bs_load ...[2024-07-14 22:15:21.277744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:22.317 passed 00:07:22.317 Test: bs_load_custom_cluster_size ...passed 00:07:22.317 Test: bs_load_after_failed_grow ...passed 00:07:22.317 Test: bs_cluster_sz ...[2024-07-14 22:15:21.305943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:22.318 [2024-07-14 22:15:21.306258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
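The bs_cluster_sz entries around this point drive spdk_bs_init() through its option-validation failure paths on purpose: zeroed options are rejected by bs_opts_verify, a metadata reservation that exceeds the device is refused, and (just below) a cluster size smaller than the blobstore's 4096-byte metadata page is refused by bs_alloc. A minimal sketch of the call shape that trips that last check, assuming the single-argument spdk_bs_opts_init() of this SPDK vintage:

    #include "spdk/blob.h"

    static void
    init_cpl(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
        /* The failure-path test expects a nonzero bserrno here (typically
         * -EINVAL); bs is NULL because initialization was rejected. */
    }

    static void
    init_with_undersized_cluster(struct spdk_bs_dev *dev)
    {
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts);
        opts.cluster_sz = 4095; /* one byte below the 4096-byte metadata page */
        spdk_bs_init(dev, &opts, init_cpl, NULL);
    }

Clusters are carved into 4096-byte metadata pages, so any cluster_sz that cannot hold even one page is rejected before the device is ever written to.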
00:07:22.318 [2024-07-14 22:15:21.306315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:22.318 passed 00:07:22.318 Test: bs_resize_md ...passed 00:07:22.318 Test: bs_destroy ...passed 00:07:22.318 Test: bs_type ...passed 00:07:22.577 Test: bs_super_block ...passed 00:07:22.577 Test: bs_test_recover_cluster_count ...passed 00:07:22.577 Test: bs_grow_live ...passed 00:07:22.577 Test: bs_grow_live_no_space ...passed 00:07:22.577 Test: bs_test_grow ...passed 00:07:22.577 Test: blob_serialize_test ...passed 00:07:22.577 Test: super_block_crc ...passed 00:07:22.577 Test: blob_thin_prov_write_count_io ...passed 00:07:22.577 Test: bs_load_iter_test ...passed 00:07:22.577 Test: blob_relations ...[2024-07-14 22:15:21.475843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.475986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 [2024-07-14 22:15:21.476994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.477076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 passed 00:07:22.577 Test: blob_relations2 ...[2024-07-14 22:15:21.492915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.493013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 [2024-07-14 22:15:21.493047] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.493080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 [2024-07-14 22:15:21.494555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.494660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 [2024-07-14 22:15:21.495076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:22.577 [2024-07-14 22:15:21.495134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.577 passed 00:07:22.577 Test: blob_relations3 ...passed 00:07:22.837 Test: blobstore_clean_power_failure ...passed 00:07:22.837 Test: blob_delete_snapshot_power_failure ...[2024-07-14 22:15:21.672598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:22.837 [2024-07-14 22:15:21.686430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:22.837 [2024-07-14 22:15:21.700423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:22.837 [2024-07-14 22:15:21.700516] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:22.837 [2024-07-14 22:15:21.700553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 [2024-07-14 22:15:21.714802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:22.837 [2024-07-14 22:15:21.714919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:22.837 [2024-07-14 22:15:21.714971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:22.837 [2024-07-14 22:15:21.715008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 [2024-07-14 22:15:21.728911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:22.837 [2024-07-14 22:15:21.729031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:22.837 [2024-07-14 22:15:21.729074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:22.837 [2024-07-14 22:15:21.729120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 [2024-07-14 22:15:21.743454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:22.837 [2024-07-14 22:15:21.743613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 [2024-07-14 22:15:21.758052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:22.837 [2024-07-14 22:15:21.758207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 [2024-07-14 22:15:21.772651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:22.837 [2024-07-14 22:15:21.772811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:22.837 passed 00:07:22.837 Test: blob_create_snapshot_power_failure ...[2024-07-14 22:15:21.815988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:22.837 [2024-07-14 22:15:21.830135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:22.837 [2024-07-14 22:15:21.858101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:22.837 [2024-07-14 22:15:21.873389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:23.096 passed 00:07:23.096 Test: blob_io_unit ...passed 00:07:23.096 Test: blob_io_unit_compatibility ...passed 00:07:23.096 Test: blob_ext_md_pages ...passed 00:07:23.096 Test: blob_esnap_io_4096_4096 ...passed 00:07:23.096 Test: blob_esnap_io_512_512 ...passed 00:07:23.096 Test: blob_esnap_io_4096_512 ...passed 00:07:23.096 Test: 
blob_esnap_io_512_4096 ...passed 00:07:23.096 Suite: blob_bs_nocopy_extent 00:07:23.096 Test: blob_open ...passed 00:07:23.096 Test: blob_create ...[2024-07-14 22:15:22.144700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:23.355 passed 00:07:23.355 Test: blob_create_loop ...passed 00:07:23.355 Test: blob_create_fail ...[2024-07-14 22:15:22.258366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:23.355 passed 00:07:23.355 Test: blob_create_internal ...passed 00:07:23.355 Test: blob_create_zero_extent ...passed 00:07:23.355 Test: blob_snapshot ...passed 00:07:23.613 Test: blob_clone ...passed 00:07:23.613 Test: blob_inflate ...[2024-07-14 22:15:22.458869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:23.613 passed 00:07:23.613 Test: blob_delete ...passed 00:07:23.613 Test: blob_resize_test ...[2024-07-14 22:15:22.529968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:23.613 passed 00:07:23.613 Test: channel_ops ...passed 00:07:23.614 Test: blob_super ...passed 00:07:23.614 Test: blob_rw_verify_iov ...passed 00:07:23.872 Test: blob_unmap ...passed 00:07:23.872 Test: blob_iter ...passed 00:07:23.872 Test: blob_parse_md ...passed 00:07:23.872 Test: bs_load_pending_removal ...passed 00:07:23.872 Test: bs_unload ...[2024-07-14 22:15:22.830775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:23.872 passed 00:07:23.872 Test: bs_usable_clusters ...passed 00:07:23.872 Test: blob_crc ...[2024-07-14 22:15:22.906254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:23.872 [2024-07-14 22:15:22.906421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:23.872 passed 00:07:24.131 Test: blob_flags ...passed 00:07:24.131 Test: bs_version ...passed 00:07:24.131 Test: blob_set_xattrs_test ...[2024-07-14 22:15:23.011957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:24.131 [2024-07-14 22:15:23.012096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:24.131 passed 00:07:24.131 Test: blob_thin_prov_alloc ...passed 00:07:24.131 Test: blob_insert_cluster_msg_test ...passed 00:07:24.389 Test: blob_thin_prov_rw ...passed 00:07:24.389 Test: blob_thin_prov_rle ...passed 00:07:24.389 Test: blob_thin_prov_rw_iov ...passed 00:07:24.389 Test: blob_snapshot_rw ...passed 00:07:24.389 Test: blob_snapshot_rw_iov ...passed 00:07:24.647 Test: blob_inflate_rw ...passed 00:07:24.647 Test: blob_snapshot_freeze_io ...passed 00:07:24.906 Test: blob_operation_split_rw ...passed 00:07:24.906 Test: blob_operation_split_rw_iov ...passed 00:07:24.906 Test: blob_simultaneous_operations ...[2024-07-14 22:15:23.874980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.906 [2024-07-14 
22:15:23.875092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.906 [2024-07-14 22:15:23.876277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.906 [2024-07-14 22:15:23.876340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.906 [2024-07-14 22:15:23.886702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.906 [2024-07-14 22:15:23.886780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.906 [2024-07-14 22:15:23.886900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.906 [2024-07-14 22:15:23.886931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.906 passed 00:07:24.906 Test: blob_persist_test ...passed 00:07:25.163 Test: blob_decouple_snapshot ...passed 00:07:25.163 Test: blob_seek_io_unit ...passed 00:07:25.163 Test: blob_nested_freezes ...passed 00:07:25.163 Suite: blob_blob_nocopy_extent 00:07:25.163 Test: blob_write ...passed 00:07:25.163 Test: blob_read ...passed 00:07:25.163 Test: blob_rw_verify ...passed 00:07:25.163 Test: blob_rw_verify_iov_nomem ...passed 00:07:25.419 Test: blob_rw_iov_read_only ...passed 00:07:25.419 Test: blob_xattr ...passed 00:07:25.419 Test: blob_dirty_shutdown ...passed 00:07:25.419 Test: blob_is_degraded ...passed 00:07:25.419 Suite: blob_esnap_bs_nocopy_extent 00:07:25.419 Test: blob_esnap_create ...passed 00:07:25.419 Test: blob_esnap_thread_add_remove ...passed 00:07:25.419 Test: blob_esnap_clone_snapshot ...passed 00:07:25.676 Test: blob_esnap_clone_inflate ...passed 00:07:25.676 Test: blob_esnap_clone_decouple ...passed 00:07:25.676 Test: blob_esnap_clone_reload ...passed 00:07:25.676 Test: blob_esnap_hotplug ...passed 00:07:25.676 Suite: blob_copy_noextent 00:07:25.676 Test: blob_init ...[2024-07-14 22:15:24.592627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:25.676 passed 00:07:25.676 Test: blob_thin_provision ...passed 00:07:25.676 Test: blob_read_only ...passed 00:07:25.676 Test: bs_load ...[2024-07-14 22:15:24.639563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:25.676 passed 00:07:25.676 Test: bs_load_custom_cluster_size ...passed 00:07:25.676 Test: bs_load_after_failed_grow ...passed 00:07:25.676 Test: bs_cluster_sz ...[2024-07-14 22:15:24.664875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:25.676 [2024-07-14 22:15:24.665093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
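The spdk_bs_init() rejection just above comes from the metadata sizing rule rather than from the cluster size itself: the num_md_pages reserved through spdk_bs_opts are stored in whole clusters, so the reservation has to fit in the cluster count the backing dev actually provides. A hedged sketch of an init call that over-reserves metadata (the page count chosen here is illustrative, not taken from the test; only the opts field and the call shape come from the API):

    #include "spdk/blob.h"

    static void
    md_init_cpl(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
        /* Expected to fail: the metadata reservation alone would need more
         * clusters than the small test device can supply. */
    }

    static void
    init_with_oversized_md(struct spdk_bs_dev *small_dev)
    {
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts);
        /* Reserve an absurd number of 4 KiB metadata pages; on a small
         * test device this translates to more clusters than exist. */
        opts.num_md_pages = 1024 * 1024;
        spdk_bs_init(small_dev, &opts, md_init_cpl, NULL);
    }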
00:07:25.676 [2024-07-14 22:15:24.665158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:25.676 passed 00:07:25.676 Test: bs_resize_md ...passed 00:07:25.676 Test: bs_destroy ...passed 00:07:25.676 Test: bs_type ...passed 00:07:25.934 Test: bs_super_block ...passed 00:07:25.934 Test: bs_test_recover_cluster_count ...passed 00:07:25.934 Test: bs_grow_live ...passed 00:07:25.934 Test: bs_grow_live_no_space ...passed 00:07:25.934 Test: bs_test_grow ...passed 00:07:25.934 Test: blob_serialize_test ...passed 00:07:25.934 Test: super_block_crc ...passed 00:07:25.934 Test: blob_thin_prov_write_count_io ...passed 00:07:25.934 Test: bs_load_iter_test ...passed 00:07:25.934 Test: blob_relations ...[2024-07-14 22:15:24.821966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.822073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 [2024-07-14 22:15:24.822657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.822703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 passed 00:07:25.934 Test: blob_relations2 ...[2024-07-14 22:15:24.836636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.836733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 [2024-07-14 22:15:24.836763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.836779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 [2024-07-14 22:15:24.837767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.837833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 [2024-07-14 22:15:24.838183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.934 [2024-07-14 22:15:24.838243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.934 passed 00:07:25.934 Test: blob_relations3 ...passed 00:07:25.934 Test: blobstore_clean_power_failure ...passed 00:07:26.192 Test: blob_delete_snapshot_power_failure ...[2024-07-14 22:15:24.997631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:26.192 [2024-07-14 22:15:25.010088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:26.192 [2024-07-14 22:15:25.010188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:26.192 [2024-07-14 22:15:25.010219] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.192 [2024-07-14 22:15:25.022797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:26.192 [2024-07-14 22:15:25.022883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:26.192 [2024-07-14 22:15:25.022918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:26.192 [2024-07-14 22:15:25.022951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.192 [2024-07-14 22:15:25.035518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:26.192 [2024-07-14 22:15:25.035637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.192 [2024-07-14 22:15:25.048239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:26.192 [2024-07-14 22:15:25.048362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.192 [2024-07-14 22:15:25.061087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:26.192 [2024-07-14 22:15:25.061193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.192 passed 00:07:26.192 Test: blob_create_snapshot_power_failure ...[2024-07-14 22:15:25.098414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:26.192 [2024-07-14 22:15:25.123167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:26.192 [2024-07-14 22:15:25.135507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:26.192 passed 00:07:26.192 Test: blob_io_unit ...passed 00:07:26.192 Test: blob_io_unit_compatibility ...passed 00:07:26.192 Test: blob_ext_md_pages ...passed 00:07:26.192 Test: blob_esnap_io_4096_4096 ...passed 00:07:26.449 Test: blob_esnap_io_512_512 ...passed 00:07:26.449 Test: blob_esnap_io_4096_512 ...passed 00:07:26.449 Test: blob_esnap_io_512_4096 ...passed 00:07:26.449 Suite: blob_bs_copy_noextent 00:07:26.449 Test: blob_open ...passed 00:07:26.449 Test: blob_create ...[2024-07-14 22:15:25.376054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:26.449 passed 00:07:26.449 Test: blob_create_loop ...passed 00:07:26.449 Test: blob_create_fail ...[2024-07-14 22:15:25.468349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:26.449 passed 00:07:26.706 Test: blob_create_internal ...passed 00:07:26.706 Test: blob_create_zero_extent ...passed 00:07:26.706 Test: blob_snapshot ...passed 00:07:26.706 Test: blob_clone ...passed 00:07:26.706 Test: blob_inflate ...[2024-07-14 22:15:25.633733] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:26.706 passed 00:07:26.706 Test: blob_delete ...passed 00:07:26.706 Test: blob_resize_test ...[2024-07-14 22:15:25.695227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:26.706 passed 00:07:26.706 Test: channel_ops ...passed 00:07:26.965 Test: blob_super ...passed 00:07:26.965 Test: blob_rw_verify_iov ...passed 00:07:26.965 Test: blob_unmap ...passed 00:07:26.965 Test: blob_iter ...passed 00:07:26.965 Test: blob_parse_md ...passed 00:07:26.965 Test: bs_load_pending_removal ...passed 00:07:26.965 Test: bs_unload ...[2024-07-14 22:15:25.951234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:26.965 passed 00:07:26.965 Test: bs_usable_clusters ...passed 00:07:26.965 Test: blob_crc ...[2024-07-14 22:15:26.021941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:26.965 [2024-07-14 22:15:26.022081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:27.223 passed 00:07:27.223 Test: blob_flags ...passed 00:07:27.223 Test: bs_version ...passed 00:07:27.223 Test: blob_set_xattrs_test ...[2024-07-14 22:15:26.119968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:27.223 [2024-07-14 22:15:26.120099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:27.223 passed 00:07:27.223 Test: blob_thin_prov_alloc ...passed 00:07:27.481 Test: blob_insert_cluster_msg_test ...passed 00:07:27.481 Test: blob_thin_prov_rw ...passed 00:07:27.481 Test: blob_thin_prov_rle ...passed 00:07:27.481 Test: blob_thin_prov_rw_iov ...passed 00:07:27.481 Test: blob_snapshot_rw ...passed 00:07:27.481 Test: blob_snapshot_rw_iov ...passed 00:07:27.739 Test: blob_inflate_rw ...passed 00:07:27.739 Test: blob_snapshot_freeze_io ...passed 00:07:27.996 Test: blob_operation_split_rw ...passed 00:07:27.997 Test: blob_operation_split_rw_iov ...passed 00:07:27.997 Test: blob_simultaneous_operations ...[2024-07-14 22:15:26.974317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.997 [2024-07-14 22:15:26.974428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.997 [2024-07-14 22:15:26.974910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.997 [2024-07-14 22:15:26.974939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.997 [2024-07-14 22:15:26.977562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.997 [2024-07-14 22:15:26.977611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.997 [2024-07-14 22:15:26.977707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:27.997 [2024-07-14 22:15:26.977730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.997 passed 00:07:27.997 Test: blob_persist_test ...passed 00:07:28.254 Test: blob_decouple_snapshot ...passed 00:07:28.254 Test: blob_seek_io_unit ...passed 00:07:28.254 Test: blob_nested_freezes ...passed 00:07:28.254 Suite: blob_blob_copy_noextent 00:07:28.254 Test: blob_write ...passed 00:07:28.254 Test: blob_read ...passed 00:07:28.254 Test: blob_rw_verify ...passed 00:07:28.254 Test: blob_rw_verify_iov_nomem ...passed 00:07:28.254 Test: blob_rw_iov_read_only ...passed 00:07:28.512 Test: blob_xattr ...passed 00:07:28.512 Test: blob_dirty_shutdown ...passed 00:07:28.512 Test: blob_is_degraded ...passed 00:07:28.512 Suite: blob_esnap_bs_copy_noextent 00:07:28.512 Test: blob_esnap_create ...passed 00:07:28.512 Test: blob_esnap_thread_add_remove ...passed 00:07:28.512 Test: blob_esnap_clone_snapshot ...passed 00:07:28.512 Test: blob_esnap_clone_inflate ...passed 00:07:28.512 Test: blob_esnap_clone_decouple ...passed 00:07:28.770 Test: blob_esnap_clone_reload ...passed 00:07:28.770 Test: blob_esnap_hotplug ...passed 00:07:28.770 Suite: blob_copy_extent 00:07:28.770 Test: blob_init ...[2024-07-14 22:15:27.615070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:28.770 passed 00:07:28.770 Test: blob_thin_provision ...passed 00:07:28.770 Test: blob_read_only ...passed 00:07:28.770 Test: bs_load ...[2024-07-14 22:15:27.658467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:28.770 passed 00:07:28.770 Test: bs_load_custom_cluster_size ...passed 00:07:28.770 Test: bs_load_after_failed_grow ...passed 00:07:28.770 Test: bs_cluster_sz ...[2024-07-14 22:15:27.682551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:28.770 [2024-07-14 22:15:27.682754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
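A few entries up, blob_init hands spdk_bs_init() a backing dev whose block length is 500 bytes and gets "unsupported dev block length" back. The constraint implied by that message is that the blobstore's 4096-byte metadata page must map to a whole number of device blocks, so 512 and 4096 work while 500 cannot. A small stand-in for the check, written from the error text rather than copied out of blobstore.c:

    #include <stdbool.h>
    #include <stdint.h>

    #define BS_PAGE_SIZE 4096u /* blobstore metadata page size, per the log */

    /* True when a bs_dev block length can back a blobstore: each metadata
     * page must occupy a whole number of device blocks. */
    static bool
    dev_blocklen_supported(uint32_t blocklen)
    {
        return blocklen != 0 && BS_PAGE_SIZE % blocklen == 0;
    }

    /* dev_blocklen_supported(512) and (4096) hold; (500) does not. */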
00:07:28.770 [2024-07-14 22:15:27.682796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:28.770 passed 00:07:28.770 Test: bs_resize_md ...passed 00:07:28.770 Test: bs_destroy ...passed 00:07:28.770 Test: bs_type ...passed 00:07:28.770 Test: bs_super_block ...passed 00:07:28.770 Test: bs_test_recover_cluster_count ...passed 00:07:28.770 Test: bs_grow_live ...passed 00:07:28.770 Test: bs_grow_live_no_space ...passed 00:07:28.770 Test: bs_test_grow ...passed 00:07:28.770 Test: blob_serialize_test ...passed 00:07:28.770 Test: super_block_crc ...passed 00:07:28.770 Test: blob_thin_prov_write_count_io ...passed 00:07:28.770 Test: bs_load_iter_test ...passed 00:07:28.770 Test: blob_relations ...[2024-07-14 22:15:27.823632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.770 [2024-07-14 22:15:27.823755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.770 [2024-07-14 22:15:27.824687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.770 [2024-07-14 22:15:27.824757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.770 passed 00:07:29.028 Test: blob_relations2 ...[2024-07-14 22:15:27.838371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.028 [2024-07-14 22:15:27.838452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:27.838509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.028 [2024-07-14 22:15:27.838533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:27.839957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.028 [2024-07-14 22:15:27.840026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:27.840466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.028 [2024-07-14 22:15:27.840516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 passed 00:07:29.028 Test: blob_relations3 ...passed 00:07:29.028 Test: blobstore_clean_power_failure ...passed 00:07:29.028 Test: blob_delete_snapshot_power_failure ...[2024-07-14 22:15:27.994241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.028 [2024-07-14 22:15:28.006753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.028 [2024-07-14 22:15:28.019259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.028 [2024-07-14 22:15:28.019353] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.028 [2024-07-14 22:15:28.019399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:28.035332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.028 [2024-07-14 22:15:28.035425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:29.028 [2024-07-14 22:15:28.035461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.028 [2024-07-14 22:15:28.035483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:28.049372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.028 [2024-07-14 22:15:28.049450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:29.028 [2024-07-14 22:15:28.049488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.028 [2024-07-14 22:15:28.049509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:28.062610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:29.028 [2024-07-14 22:15:28.062763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.028 [2024-07-14 22:15:28.075711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:29.028 [2024-07-14 22:15:28.075858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.286 [2024-07-14 22:15:28.088824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:29.286 [2024-07-14 22:15:28.088925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.286 passed 00:07:29.286 Test: blob_create_snapshot_power_failure ...[2024-07-14 22:15:28.125231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.286 [2024-07-14 22:15:28.136465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.286 [2024-07-14 22:15:28.158891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.286 [2024-07-14 22:15:28.170659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:29.286 passed 00:07:29.286 Test: blob_io_unit ...passed 00:07:29.286 Test: blob_io_unit_compatibility ...passed 00:07:29.286 Test: blob_ext_md_pages ...passed 00:07:29.286 Test: blob_esnap_io_4096_4096 ...passed 00:07:29.286 Test: blob_esnap_io_512_512 ...passed 00:07:29.286 Test: blob_esnap_io_4096_512 ...passed 00:07:29.286 Test: 
blob_esnap_io_512_4096 ...passed 00:07:29.286 Suite: blob_bs_copy_extent 00:07:29.543 Test: blob_open ...passed 00:07:29.543 Test: blob_create ...[2024-07-14 22:15:28.393644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:29.543 passed 00:07:29.543 Test: blob_create_loop ...passed 00:07:29.543 Test: blob_create_fail ...[2024-07-14 22:15:28.487439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:29.543 passed 00:07:29.543 Test: blob_create_internal ...passed 00:07:29.543 Test: blob_create_zero_extent ...passed 00:07:29.543 Test: blob_snapshot ...passed 00:07:29.801 Test: blob_clone ...passed 00:07:29.801 Test: blob_inflate ...[2024-07-14 22:15:28.653616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:29.801 passed 00:07:29.801 Test: blob_delete ...passed 00:07:29.801 Test: blob_resize_test ...[2024-07-14 22:15:28.713093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:29.801 passed 00:07:29.801 Test: channel_ops ...passed 00:07:29.801 Test: blob_super ...passed 00:07:29.801 Test: blob_rw_verify_iov ...passed 00:07:29.801 Test: blob_unmap ...passed 00:07:30.058 Test: blob_iter ...passed 00:07:30.058 Test: blob_parse_md ...passed 00:07:30.058 Test: bs_load_pending_removal ...passed 00:07:30.058 Test: bs_unload ...[2024-07-14 22:15:28.960320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:30.058 passed 00:07:30.058 Test: bs_usable_clusters ...passed 00:07:30.058 Test: blob_crc ...[2024-07-14 22:15:29.021650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.058 [2024-07-14 22:15:29.021794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.058 passed 00:07:30.058 Test: blob_flags ...passed 00:07:30.058 Test: bs_version ...passed 00:07:30.317 Test: blob_set_xattrs_test ...[2024-07-14 22:15:29.118647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.317 [2024-07-14 22:15:29.118818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.317 passed 00:07:30.317 Test: blob_thin_prov_alloc ...passed 00:07:30.317 Test: blob_insert_cluster_msg_test ...passed 00:07:30.317 Test: blob_thin_prov_rw ...passed 00:07:30.317 Test: blob_thin_prov_rle ...passed 00:07:30.317 Test: blob_thin_prov_rw_iov ...passed 00:07:30.575 Test: blob_snapshot_rw ...passed 00:07:30.575 Test: blob_snapshot_rw_iov ...passed 00:07:30.575 Test: blob_inflate_rw ...passed 00:07:30.833 Test: blob_snapshot_freeze_io ...passed 00:07:30.833 Test: blob_operation_split_rw ...passed 00:07:31.091 Test: blob_operation_split_rw_iov ...passed 00:07:31.091 Test: blob_simultaneous_operations ...[2024-07-14 22:15:29.935075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.091 [2024-07-14 
22:15:29.935193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.091 [2024-07-14 22:15:29.935661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.091 [2024-07-14 22:15:29.935689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.091 [2024-07-14 22:15:29.938075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.091 [2024-07-14 22:15:29.938122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.091 [2024-07-14 22:15:29.938230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.091 [2024-07-14 22:15:29.938254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.091 passed 00:07:31.091 Test: blob_persist_test ...passed 00:07:31.091 Test: blob_decouple_snapshot ...passed 00:07:31.091 Test: blob_seek_io_unit ...passed 00:07:31.091 Test: blob_nested_freezes ...passed 00:07:31.091 Suite: blob_blob_copy_extent 00:07:31.091 Test: blob_write ...passed 00:07:31.349 Test: blob_read ...passed 00:07:31.349 Test: blob_rw_verify ...passed 00:07:31.349 Test: blob_rw_verify_iov_nomem ...passed 00:07:31.349 Test: blob_rw_iov_read_only ...passed 00:07:31.349 Test: blob_xattr ...passed 00:07:31.349 Test: blob_dirty_shutdown ...passed 00:07:31.349 Test: blob_is_degraded ...passed 00:07:31.349 Suite: blob_esnap_bs_copy_extent 00:07:31.349 Test: blob_esnap_create ...passed 00:07:31.607 Test: blob_esnap_thread_add_remove ...passed 00:07:31.607 Test: blob_esnap_clone_snapshot ...passed 00:07:31.607 Test: blob_esnap_clone_inflate ...passed 00:07:31.607 Test: blob_esnap_clone_decouple ...passed 00:07:31.607 Test: blob_esnap_clone_reload ...passed 00:07:31.607 Test: blob_esnap_hotplug ...passed 00:07:31.607 00:07:31.607 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.607 suites 16 16 n/a 0 0 00:07:31.607 tests 348 348 348 0 0 00:07:31.607 asserts 92605 92605 92605 0 n/a 00:07:31.607 00:07:31.607 Elapsed time = 12.947 seconds 00:07:31.865 22:15:30 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:31.865 00:07:31.865 00:07:31.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.865 http://cunit.sourceforge.net/ 00:07:31.865 00:07:31.865 00:07:31.865 Suite: blob_bdev 00:07:31.865 Test: create_bs_dev ...passed 00:07:31.865 Test: create_bs_dev_ro ...passed 00:07:31.865 Test: create_bs_dev_rw ...[2024-07-14 22:15:30.704544] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:31.865 passed 00:07:31.865 Test: claim_bs_dev ...passed 00:07:31.865 Test: claim_bs_dev_ro ...[2024-07-14 22:15:30.705019] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:31.865 passed 00:07:31.865 Test: deferred_destroy_refs ...passed 00:07:31.865 Test: deferred_destroy_channels ...passed 00:07:31.865 Test: deferred_destroy_threads ...passed 00:07:31.865 00:07:31.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.865 suites 1 1 n/a 0 0 00:07:31.865 tests 8 8 8 0 0 00:07:31.865 
asserts 119 119 119 0 n/a 00:07:31.865 00:07:31.865 Elapsed time = 0.001 seconds 00:07:31.865 22:15:30 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:31.865 00:07:31.865 00:07:31.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.865 http://cunit.sourceforge.net/ 00:07:31.865 00:07:31.865 00:07:31.865 Suite: tree 00:07:31.865 Test: blobfs_tree_op_test ...passed 00:07:31.865 00:07:31.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.865 suites 1 1 n/a 0 0 00:07:31.865 tests 1 1 1 0 0 00:07:31.865 asserts 27 27 27 0 n/a 00:07:31.865 00:07:31.865 Elapsed time = 0.000 seconds 00:07:31.865 22:15:30 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:32.122 00:07:32.122 00:07:32.122 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.122 http://cunit.sourceforge.net/ 00:07:32.122 00:07:32.122 00:07:32.122 Suite: blobfs_async_ut 00:07:32.122 Test: fs_init ...passed 00:07:32.122 Test: fs_open ...passed 00:07:32.122 Test: fs_create ...passed 00:07:32.122 Test: fs_truncate ...passed 00:07:32.122 Test: fs_rename ...[2024-07-14 22:15:30.898148] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:32.122 passed 00:07:32.122 Test: fs_rw_async ...passed 00:07:32.122 Test: fs_writev_readv_async ...passed 00:07:32.122 Test: tree_find_buffer_ut ...passed 00:07:32.122 Test: channel_ops ...passed 00:07:32.122 Test: channel_ops_sync ...passed 00:07:32.122 00:07:32.122 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.122 suites 1 1 n/a 0 0 00:07:32.122 tests 10 10 10 0 0 00:07:32.122 asserts 292 292 292 0 n/a 00:07:32.122 00:07:32.122 Elapsed time = 0.182 seconds 00:07:32.122 22:15:30 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:32.122 00:07:32.122 00:07:32.122 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.122 http://cunit.sourceforge.net/ 00:07:32.122 00:07:32.122 00:07:32.122 Suite: blobfs_sync_ut 00:07:32.122 Test: cache_read_after_write ...[2024-07-14 22:15:31.087610] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:32.122 passed 00:07:32.122 Test: file_length ...passed 00:07:32.122 Test: append_write_to_extend_blob ...passed 00:07:32.122 Test: partial_buffer ...passed 00:07:32.122 Test: cache_write_null_buffer ...passed 00:07:32.122 Test: fs_create_sync ...passed 00:07:32.380 Test: fs_rename_sync ...passed 00:07:32.380 Test: cache_append_no_cache ...passed 00:07:32.380 Test: fs_delete_file_without_close ...passed 00:07:32.380 00:07:32.380 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.380 suites 1 1 n/a 0 0 00:07:32.380 tests 9 9 9 0 0 00:07:32.380 asserts 345 345 345 0 n/a 00:07:32.380 00:07:32.380 Elapsed time = 0.376 seconds
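Both blobfs suites above deliberately call spdk_fs_delete_file_async() with a file name that does not exist (or no longer does), which is what prints the "Cannot find the file=... to deleted" entries. A minimal sketch of that failure path on the async blobfs API (filesystem setup and the reactor loop are omitted; the file name is the one from the log):

    #include "spdk/blobfs.h"

    static void
    delete_cpl(void *ctx, int fserrno)
    {
        /* For a missing file the callback reports a negative errno,
         * -ENOENT; blobfs logs the error line above before invoking it. */
    }

    static void
    delete_missing_file(struct spdk_filesystem *fs)
    {
        spdk_fs_delete_file_async(fs, "file1", delete_cpl, NULL);
    }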
00:07:32.380 22:15:31 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:32.380 00:07:32.380 00:07:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.380 http://cunit.sourceforge.net/ 00:07:32.380 00:07:32.380 00:07:32.380 Suite: blobfs_bdev_ut 00:07:32.380 Test: spdk_blobfs_bdev_detect_test ...passed 00:07:32.380 Test: spdk_blobfs_bdev_create_test ...passed 00:07:32.380 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:32.380 00:07:32.380 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.380 suites 1 1 n/a 0 0 00:07:32.380 tests 3 3 3 0 0 00:07:32.380 asserts 9 9 9 0 n/a 00:07:32.380 00:07:32.380 Elapsed time = 0.001 seconds 00:07:32.380 [2024-07-14 22:15:31.280891] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:32.380 [2024-07-14 22:15:31.281258] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:32.380 00:07:32.380 real 0m13.667s 00:07:32.380 user 0m13.089s 00:07:32.380 sys 0m0.772s 00:07:32.380 22:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.380 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.380 ************************************ 00:07:32.380 END TEST unittest_blob_blobfs 00:07:32.380 ************************************ 00:07:32.380 22:15:31 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:32.380 22:15:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.380 22:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.380 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.380 ************************************ 00:07:32.380 START TEST unittest_event 00:07:32.380 ************************************ 00:07:32.380 22:15:31 -- common/autotest_common.sh@1104 -- # unittest_event 00:07:32.380 22:15:31 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:32.380 00:07:32.380 00:07:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.380 http://cunit.sourceforge.net/ 00:07:32.380 00:07:32.380 00:07:32.380 Suite: app_suite 00:07:32.380 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:32.380 options: 00:07:32.380 -c, --config JSON config file (default none) 00:07:32.380 --json JSON config file (default none) 00:07:32.380 --json-ignore-init-errors 00:07:32.380 don't exit on invalid config entry 00:07:32.380 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:32.380 -g, --single-file-segments 00:07:32.380 force creating just one hugetlbfs file 00:07:32.380 -h, --help show this usage 00:07:32.380 -i, --shm-id shared memory ID (optional) 00:07:32.380 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:32.380 --lcores lcore to CPU mapping list. The list is in the format: 00:07:32.380 [<,lcores[@CPUs]>...] 00:07:32.380 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:32.380 Within the group, '-' is used for range separator, 00:07:32.380 ',' is used for single number separator. 00:07:32.380 '( )' can be omitted for single element group, 00:07:32.380 '@' can be omitted if cpus and lcores have the same value 00:07:32.380 -n, --mem-channels channel number of memory channels used for DPDK 00:07:32.380 -p, --main-core main (primary) core for DPDK 00:07:32.380 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:32.380 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:32.380 --disable-cpumask-locks Disable CPU core lock files.
00:07:32.380 --silence-noticelog disable notice level logging to stderr 00:07:32.380 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:32.380 -u, --no-pci disable PCI access 00:07:32.380 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:32.380 --max-delay maximum reactor delay (in microseconds) 00:07:32.380 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:32.380 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:32.380 -R, --huge-unlink unlink huge files after initialization 00:07:32.380 -v, --version print SPDK version 00:07:32.380 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:32.380 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:32.380 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:32.380 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:32.380 Tracepoints vary in size and can use more than one trace entry. 00:07:32.380 --rpcs-allowed comma-separated list of permitted RPCS 00:07:32.380 --env-context Opaque context for use of the env implementation 00:07:32.380 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:32.380 --no-huge run without using hugepages 00:07:32.380 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:32.380 -e, --tpoint-group [:] 00:07:32.380 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:32.380 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:32.380 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:32.380 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:32.380 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:32.380 app_ut [options] 00:07:32.380 options: 00:07:32.380 -c, --config JSON config file (default none) 00:07:32.380 --json JSON config file (default none) 00:07:32.380 --json-ignore-init-errors 00:07:32.380 don't exit on invalid config entry 00:07:32.380 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:32.380 -g, --single-file-segments 00:07:32.380 force creating just one hugetlbfs file 00:07:32.380 -h, --help show this usage 00:07:32.380 -i, --shm-id shared memory ID (optional) 00:07:32.380 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:32.380 --lcores lcore to CPU mapping list. The list is in the format: 00:07:32.380 [<,lcores[@CPUs]>...] 00:07:32.380 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:32.380 Within the group, '-' is used for range separator, 00:07:32.380 ',' is used for single number separator. 00:07:32.380 '( )' can be omitted for single element group, 00:07:32.380 '@' can be omitted if cpus and lcores have the same value 00:07:32.380 -n, --mem-channels channel number of memory channels used for DPDK 00:07:32.380 -p, --main-core main (primary) core for DPDK 00:07:32.380 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:32.380 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:32.380 --disable-cpumask-locks Disable CPU core lock files. 
00:07:32.380 --silence-noticelog disable notice level logging to stderr 00:07:32.380 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:32.380 -u, --no-pci disable PCI access 00:07:32.380 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:32.380 --max-delay maximum reactor delay (in microseconds) 00:07:32.380 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:32.380 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:32.380 -R, --huge-unlink unlink huge files after initialization 00:07:32.380 -v, --version print SPDK version 00:07:32.380 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:32.380 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:32.380 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:32.380 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:32.380 Tracepoints vary in size and can use more than one trace entry. 00:07:32.380 --rpcs-allowed comma-separated list of permitted RPCS 00:07:32.380 --env-context Opaque context for use of the env implementation 00:07:32.380 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:32.380 --no-huge run without using hugepages 00:07:32.380 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:32.380 -e, --tpoint-group [:] 00:07:32.380 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:32.380 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:32.380 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:32.380 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:32.380 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:32.380 app_ut: invalid option -- 'z' 00:07:32.380 app_ut: unrecognized option '--test-long-opt' 00:07:32.381 [2024-07-14 22:15:31.367495] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:32.381 app_ut [options] 00:07:32.381 options: 00:07:32.381 -c, --config JSON config file (default none) 00:07:32.381 --json JSON config file (default none) 00:07:32.381 --json-ignore-init-errors 00:07:32.381 don't exit on invalid config entry 00:07:32.381 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:32.381 -g, --single-file-segments 00:07:32.381 force creating just one hugetlbfs file 00:07:32.381 -h, --help show this usage 00:07:32.381 -i, --shm-id shared memory ID (optional) 00:07:32.381 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:32.381 --lcores lcore to CPU mapping list. The list is in the format: 00:07:32.381 [<,lcores[@CPUs]>...] 00:07:32.381 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:32.381 Within the group, '-' is used for range separator, 00:07:32.381 ',' is used for single number separator. 
00:07:32.381 '( )' can be omitted for single element group, 00:07:32.381 '@' can be omitted if cpus and lcores have the same value 00:07:32.381 -n, --mem-channels channel number of memory channels used for DPDK 00:07:32.381 -p, --main-core main (primary) core for DPDK 00:07:32.381 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:32.381 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:32.381 --disable-cpumask-locks Disable CPU core lock files. 00:07:32.381 --silence-noticelog disable notice level logging to stderr 00:07:32.381 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:32.381 -u, --no-pci disable PCI access 00:07:32.381 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:32.381 --max-delay maximum reactor delay (in microseconds) 00:07:32.381 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:32.381 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:32.381 -R, --huge-unlink unlink huge files after initialization 00:07:32.381 -v, --version print SPDK version 00:07:32.381 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:32.381 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:32.381 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:32.381 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:32.381 Tracepoints vary in size and can use more than one trace entry. 00:07:32.381 --rpcs-allowed comma-separated list of permitted RPCS 00:07:32.381 --env-context Opaque context for use of the env implementation 00:07:32.381 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:32.381 --no-huge run without using hugepages 00:07:32.381 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:32.381 -e, --tpoint-group [:] 00:07:32.381 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:32.381 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:32.381 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:32.381 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:32.381 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:32.381 passed 00:07:32.381 00:07:32.381 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.381 suites 1 1 n/a 0 0 00:07:32.381 tests 1 1 1 0 0 00:07:32.381 asserts 8 8 8 0 n/a 00:07:32.381 00:07:32.381 Elapsed time = 0.001 seconds 00:07:32.381 [2024-07-14 22:15:31.368150] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:32.381 [2024-07-14 22:15:31.368393] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:32.381
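The repeated usage dumps in this test come from test_spdk_app_parse_args feeding spdk_app_parse_args() deliberately bad argument sets: an unknown short option ('z'), an unknown long option ('--test-long-opt'), an app-specific getopt string that collides with the generic 'c' (the app.c:1030 entry), and finally the conflicting -B/-W pair and invalid main-core combination logged just above. A sketch of a well-formed call on the seven-argument spdk_app_parse_args() of this SPDK vintage; the option letter, names, and callbacks here are illustrative, not taken from the test:

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/event.h"

    static int
    parse_arg(int ch, char *arg)
    {
        switch (ch) {
        case 'x': /* hypothetical app flag; 'x' avoids SPDK's reserved letters */
            return 0;
        default:
            return -EINVAL;
        }
    }

    static void
    usage(void)
    {
        printf(" -x                        enable the example behavior\n");
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};

        spdk_app_opts_init(&opts);
        opts.name = "parse_args_example";

        if (spdk_app_parse_args(argc, argv, &opts, "x", NULL,
                                parse_arg, usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
            return 1;
        }
        /* spdk_app_start(&opts, ...) would follow in a real application. */
        return 0;
    }

Duplicated letters are rejected up front because the framework and the application parse one merged getopt string, which is exactly what the "Duplicated option" error earlier in this test demonstrates.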
00:07:32.381 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:32.381 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:32.381 passed 00:07:32.381 00:07:32.381 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.381 suites 1 1 n/a 0 0 00:07:32.381 tests 1 1 1 0 0 00:07:32.381 asserts 8 8 8 0 n/a 00:07:32.381 00:07:32.381 Elapsed time = 0.001 seconds 00:07:32.381 [2024-07-14 22:15:31.368150] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:32.381 [2024-07-14 22:15:31.368393] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:32.381 22:15:31 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:32.381 00:07:32.381 00:07:32.381 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.381 http://cunit.sourceforge.net/ 00:07:32.381 00:07:32.381 00:07:32.381 Suite: app_suite 00:07:32.381 Test: test_create_reactor ...passed 00:07:32.381 Test: test_init_reactors ...passed 00:07:32.381 Test: test_event_call ...passed 00:07:32.381 Test: test_schedule_thread ...passed 00:07:32.381 Test: test_reschedule_thread ...passed 00:07:32.381 Test: test_bind_thread ...passed 00:07:32.381 Test: test_for_each_reactor ...passed 00:07:32.381 Test: test_reactor_stats ...passed 00:07:32.381 Test: test_scheduler ...passed 00:07:32.381 Test: test_governor ...passed 00:07:32.381 00:07:32.381 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.381 suites 1 1 n/a 0 0 00:07:32.381 tests 10 10 10 0 0 00:07:32.381 asserts 344 344 344 0 n/a 00:07:32.381 00:07:32.381 Elapsed time = 0.014 seconds 00:07:32.381 ************************************ 00:07:32.381 END TEST unittest_event 00:07:32.381 ************************************ 00:07:32.381 00:07:32.381 real 0m0.086s 00:07:32.381 user 0m0.062s 00:07:32.381 sys 0m0.025s 00:07:32.381 22:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.381 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.638 22:15:31 -- unit/unittest.sh@233 -- # uname -s 00:07:32.638 22:15:31 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:32.638 22:15:31 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:32.638 22:15:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.638 22:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.638 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.638 ************************************ 00:07:32.638 START TEST unittest_ftl 00:07:32.638 ************************************ 00:07:32.638 22:15:31 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:07:32.638 22:15:31 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:32.638 00:07:32.638 00:07:32.638 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.638 http://cunit.sourceforge.net/ 00:07:32.638 00:07:32.638 00:07:32.638 Suite: ftl_band_suite 00:07:32.638 Test: test_band_block_offset_from_addr_base ...passed 00:07:32.638 Test: test_band_block_offset_from_addr_offset ...passed 00:07:32.638 Test: test_band_addr_from_block_offset ...passed 00:07:32.638 Test: test_band_set_addr ...passed 00:07:32.638 Test: test_invalidate_addr ...passed 00:07:32.638 Test: test_next_xfer_addr ...passed 00:07:32.638 00:07:32.638 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:32.638 suites 1 1 n/a 0 0 00:07:32.638 tests 6 6 6 0 0 00:07:32.638 asserts 30356 30356 30356 0 n/a 00:07:32.638 00:07:32.638 Elapsed time = 0.178 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_bitmap 00:07:32.896 Test: test_ftl_bitmap_create ...[2024-07-14 22:15:31.743539] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:32.896 [2024-07-14 22:15:31.744000] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:32.896 passed 00:07:32.896 Test: test_ftl_bitmap_get ...passed 00:07:32.896 Test: test_ftl_bitmap_set ...passed 00:07:32.896 Test: test_ftl_bitmap_clear ...passed 00:07:32.896 Test: test_ftl_bitmap_find_first_set ...passed 00:07:32.896 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:32.896 Test: test_ftl_bitmap_count_set ...passed 00:07:32.896 00:07:32.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.896 suites 1 1 n/a 0 0 00:07:32.896 tests 7 7 7 0 0 00:07:32.896 asserts 137 137 137 0 n/a 00:07:32.896 00:07:32.896 Elapsed time = 0.001 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_io_suite 00:07:32.896 Test: test_completion ...passed 00:07:32.896 Test: test_multiple_ios ...passed 00:07:32.896 00:07:32.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.896 suites 1 1 n/a 0 0 00:07:32.896 tests 2 2 2 0 0 00:07:32.896 asserts 47 47 47 0 n/a 00:07:32.896 00:07:32.896 Elapsed time = 0.003 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_mngt 00:07:32.896 Test: test_next_step ...passed 00:07:32.896 Test: test_continue_step ...passed 00:07:32.896 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:32.896 Test: test_fail_step ...passed 00:07:32.896 Test: test_mngt_call_and_call_rollback ...passed 00:07:32.896 Test: test_nested_process_failure ...passed 00:07:32.896 00:07:32.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.896 suites 1 1 n/a 0 0 00:07:32.896 tests 6 6 6 0 0 00:07:32.896 asserts 176 176 176 0 n/a 00:07:32.896 00:07:32.896 Elapsed time = 0.001 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_mempool 00:07:32.896 Test: test_ftl_mempool_create ...passed 00:07:32.896 Test: test_ftl_mempool_get_put ...passed 00:07:32.896 00:07:32.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.896 suites 1 1 n/a 0 0 00:07:32.896 tests 2 2 2 0 0 
00:07:32.896 asserts 36 36 36 0 n/a 00:07:32.896 00:07:32.896 Elapsed time = 0.000 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_addr64_suite 00:07:32.896 Test: test_addr_cached ...passed 00:07:32.896 00:07:32.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.896 suites 1 1 n/a 0 0 00:07:32.896 tests 1 1 1 0 0 00:07:32.896 asserts 1536 1536 1536 0 n/a 00:07:32.896 00:07:32.896 Elapsed time = 0.000 seconds 00:07:32.896 22:15:31 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:32.896 00:07:32.896 00:07:32.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.896 http://cunit.sourceforge.net/ 00:07:32.896 00:07:32.896 00:07:32.896 Suite: ftl_sb 00:07:32.896 Test: test_sb_crc_v2 ...passed 00:07:32.896 Test: test_sb_crc_v3 ...passed 00:07:32.897 Test: test_sb_v3_md_layout ...[2024-07-14 22:15:31.879142] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:32.897 [2024-07-14 22:15:31.879732] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:32.897 [2024-07-14 22:15:31.879978] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:32.897 [2024-07-14 22:15:31.880215] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:32.897 [2024-07-14 22:15:31.880420] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:32.897 [2024-07-14 22:15:31.880665] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:32.897 [2024-07-14 22:15:31.880813] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:32.897 [2024-07-14 22:15:31.880976] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:32.897 [2024-07-14 22:15:31.881103] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:32.897 [2024-07-14 22:15:31.881179] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:32.897 [2024-07-14 22:15:31.881299] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:32.897 passed 00:07:32.897 Test: test_sb_v5_md_layout ...passed 00:07:32.897 00:07:32.897 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.897 suites 1 1 n/a 0 0 00:07:32.897 tests 4 4 4 0 0 00:07:32.897 asserts 148 148 148 0 n/a 00:07:32.897 00:07:32.897 Elapsed time = 0.003 seconds 00:07:32.897 22:15:31 -- unit/unittest.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:32.897 00:07:32.897 00:07:32.897 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.897 http://cunit.sourceforge.net/ 00:07:32.897 00:07:32.897 00:07:32.897 Suite: ftl_layout_upgrade 00:07:32.897 Test: test_l2p_upgrade ...passed 00:07:32.897 00:07:32.897 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.897 suites 1 1 n/a 0 0 00:07:32.897 tests 1 1 1 0 0 00:07:32.897 asserts 140 140 140 0 n/a 00:07:32.897 00:07:32.897 Elapsed time = 0.000 seconds 00:07:32.897 00:07:32.897 real 0m0.435s 00:07:32.897 user 0m0.233s 00:07:32.897 sys 0m0.195s 00:07:32.897 22:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.897 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.897 ************************************ 00:07:32.897 END TEST unittest_ftl 00:07:32.897 ************************************ 00:07:33.156 22:15:31 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:33.156 22:15:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.156 22:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.156 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:33.156 ************************************ 00:07:33.156 START TEST unittest_accel 00:07:33.156 ************************************ 00:07:33.156 22:15:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:33.156 00:07:33.156 00:07:33.156 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.156 http://cunit.sourceforge.net/ 00:07:33.156 00:07:33.156 00:07:33.156 Suite: accel_sequence 00:07:33.156 Test: test_sequence_fill_copy ...passed 00:07:33.156 Test: test_sequence_abort ...passed 00:07:33.156 Test: test_sequence_append_error ...passed 00:07:33.156 Test: test_sequence_completion_error ...[2024-07-14 22:15:31.998718] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f35335027c0 00:07:33.156 [2024-07-14 22:15:31.999197] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f35335027c0 00:07:33.156 [2024-07-14 22:15:31.999365] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f35335027c0 00:07:33.156 [2024-07-14 22:15:31.999467] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f35335027c0 00:07:33.156 passed 00:07:33.156 Test: test_sequence_decompress ...passed 00:07:33.156 Test: test_sequence_reverse ...passed 00:07:33.156 Test: test_sequence_copy_elision ...passed 00:07:33.156 Test: test_sequence_accel_buffers ...passed 00:07:33.156 Test: test_sequence_memory_domain ...[2024-07-14 22:15:32.012599] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:33.156 [2024-07-14 22:15:32.012924] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:33.156 passed 00:07:33.156 Test: test_sequence_module_memory_domain ...passed 00:07:33.156 Test: test_sequence_crypto ...passed 00:07:33.156 Test: test_sequence_driver ...[2024-07-14 22:15:32.020800] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f35328da7c0 using driver: ut 00:07:33.156 [2024-07-14 22:15:32.021040] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f35328da7c0 through driver: ut 00:07:33.156 passed 00:07:33.156 Test: test_sequence_same_iovs ...passed 00:07:33.156 Test: test_sequence_crc32 ...passed 00:07:33.156 Suite: accel 00:07:33.156 Test: test_spdk_accel_task_complete ...passed 00:07:33.156 Test: test_get_task ...passed 00:07:33.156 Test: test_spdk_accel_submit_copy ...passed 00:07:33.156 Test: test_spdk_accel_submit_dualcast ...[2024-07-14 22:15:32.027448] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:33.156 [2024-07-14 22:15:32.027650] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:33.156 passed 00:07:33.156 Test: test_spdk_accel_submit_compare ...passed 00:07:33.156 Test: test_spdk_accel_submit_fill ...passed 00:07:33.156 Test: test_spdk_accel_submit_crc32c ...passed 00:07:33.156 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:33.156 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:33.156 Test: test_spdk_accel_submit_xor ...passed 00:07:33.156 Test: test_spdk_accel_module_find_by_name ...passed 00:07:33.156 Test: test_spdk_accel_module_register ...passed 00:07:33.156 00:07:33.156 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.156 suites 2 2 n/a 0 0 00:07:33.156 tests 26 26 26 0 0 00:07:33.156 asserts 831 831 831 0 n/a 00:07:33.156 00:07:33.156 Elapsed time = 0.038 seconds 00:07:33.156 ************************************ 00:07:33.156 END TEST unittest_accel 00:07:33.156 ************************************ 00:07:33.156 00:07:33.156 real 0m0.076s 00:07:33.156 user 0m0.044s 00:07:33.156 sys 0m0.028s 00:07:33.156 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.156 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.156 22:15:32 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:33.156 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.156 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.156 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.156 ************************************ 00:07:33.156 START TEST unittest_ioat 00:07:33.156 ************************************ 00:07:33.156 22:15:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:33.156 00:07:33.156 00:07:33.156 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.156 http://cunit.sourceforge.net/ 00:07:33.156 00:07:33.156 00:07:33.156 Suite: ioat 00:07:33.156 Test: ioat_state_check ...passed 00:07:33.156 00:07:33.156 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.156 suites 1 1 n/a 0 0 00:07:33.156 tests 1 1 1 0 0 00:07:33.156 asserts 32 32 32 0 n/a 00:07:33.156 00:07:33.156 Elapsed time = 0.000 seconds 00:07:33.156 ************************************ 00:07:33.156 END TEST unittest_ioat 00:07:33.156 ************************************ 00:07:33.156 00:07:33.156 real 0m0.034s 00:07:33.156 user 0m0.015s 00:07:33.156 sys 0m0.018s 00:07:33.156 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 
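The spdk_accel_submit_dualcast failures above are deliberate: the test passes destination buffers that violate the stated requirement of 4 KiB alignment on both dst addresses. A standalone guard in the same spirit (dualcast_dst_ok is a hypothetical helper, not part of the accel API):

    /* Illustrative 4 KiB alignment check matching the "Dualcast requires 4K
     * alignment on dst addresses" error above. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DUALCAST_ALIGN 0x1000u /* 4 KiB */

    static bool dualcast_dst_ok(const void *dst1, const void *dst2)
    {
        return ((uintptr_t)dst1 % DUALCAST_ALIGN) == 0 &&
               ((uintptr_t)dst2 % DUALCAST_ALIGN) == 0;
    }

    int main(void)
    {
        /* aligned_alloc (C11) guarantees the alignment dualcast expects */
        void *dst1 = aligned_alloc(DUALCAST_ALIGN, DUALCAST_ALIGN);
        void *dst2 = aligned_alloc(DUALCAST_ALIGN, DUALCAST_ALIGN);

        if (dst1 == NULL || dst2 == NULL) {
            return 1;
        }
        printf("aligned: %s\n", dualcast_dst_ok(dst1, dst2) ? "yes" : "no");
        free(dst1);
        free(dst2);
        return 0;
    }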
00:07:33.156 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.156 22:15:32 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:33.156 22:15:32 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:33.156 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.156 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.156 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.156 ************************************ 00:07:33.156 START TEST unittest_idxd_user 00:07:33.156 ************************************ 00:07:33.156 22:15:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:33.156 00:07:33.156 00:07:33.156 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.156 http://cunit.sourceforge.net/ 00:07:33.156 00:07:33.156 00:07:33.156 Suite: idxd_user 00:07:33.156 Test: test_idxd_wait_cmd ...[2024-07-14 22:15:32.195752] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:33.156 [2024-07-14 22:15:32.196399] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:33.156 passed 00:07:33.156 Test: test_idxd_reset_dev ...[2024-07-14 22:15:32.196935] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:33.157 [2024-07-14 22:15:32.197108] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:33.157 passed 00:07:33.157 Test: test_idxd_group_config ...passed 00:07:33.157 Test: test_idxd_wq_config ...passed 00:07:33.157 00:07:33.157 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.157 suites 1 1 n/a 0 0 00:07:33.157 tests 4 4 4 0 0 00:07:33.157 asserts 20 20 20 0 n/a 00:07:33.157 00:07:33.157 Elapsed time = 0.001 seconds 00:07:33.157 ************************************ 00:07:33.157 END TEST unittest_idxd_user 00:07:33.157 ************************************ 00:07:33.157 00:07:33.157 real 0m0.034s 00:07:33.157 user 0m0.020s 00:07:33.157 sys 0m0.012s 00:07:33.157 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.157 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 22:15:32 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:33.415 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.415 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.415 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 ************************************ 00:07:33.415 START TEST unittest_iscsi 00:07:33.415 ************************************ 00:07:33.415 22:15:32 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:07:33.415 22:15:32 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:33.415 00:07:33.415 00:07:33.415 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.415 http://cunit.sourceforge.net/ 00:07:33.415 00:07:33.415 00:07:33.415 Suite: conn_suite 00:07:33.415 Test: read_task_split_in_order_case ...passed 00:07:33.415 Test: read_task_split_reverse_order_case ...passed 00:07:33.415 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:33.415 Test: process_non_read_task_completion_test 
...passed 00:07:33.415 Test: free_tasks_on_connection ...passed 00:07:33.415 Test: free_tasks_with_queued_datain ...passed 00:07:33.415 Test: abort_queued_datain_task_test ...passed 00:07:33.415 Test: abort_queued_datain_tasks_test ...passed 00:07:33.415 00:07:33.415 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.415 suites 1 1 n/a 0 0 00:07:33.415 tests 8 8 8 0 0 00:07:33.415 asserts 230 230 230 0 n/a 00:07:33.415 00:07:33.415 Elapsed time = 0.000 seconds 00:07:33.415 22:15:32 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:33.415 00:07:33.415 00:07:33.415 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.415 http://cunit.sourceforge.net/ 00:07:33.415 00:07:33.415 00:07:33.415 Suite: iscsi_suite 00:07:33.415 Test: param_negotiation_test ...passed 00:07:33.415 Test: list_negotiation_test ...passed 00:07:33.415 Test: parse_valid_test ...passed 00:07:33.415 Test: parse_invalid_test ...[2024-07-14 22:15:32.317735] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:33.415 [2024-07-14 22:15:32.318117] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:33.415 [2024-07-14 22:15:32.318294] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:33.415 [2024-07-14 22:15:32.318520] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:33.415 [2024-07-14 22:15:32.318765] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:33.415 [2024-07-14 22:15:32.318926] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:33.415 [2024-07-14 22:15:32.319176] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:33.415 passed 00:07:33.415 00:07:33.415 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.415 suites 1 1 n/a 0 0 00:07:33.415 tests 4 4 4 0 0 00:07:33.415 asserts 161 161 161 0 n/a 00:07:33.415 00:07:33.415 Elapsed time = 0.005 seconds 00:07:33.415 22:15:32 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:33.415 00:07:33.415 00:07:33.415 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.415 http://cunit.sourceforge.net/ 00:07:33.415 00:07:33.415 00:07:33.415 Suite: iscsi_target_node_suite 00:07:33.415 Test: add_lun_test_cases ...[2024-07-14 22:15:32.354706] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:33.415 [2024-07-14 22:15:32.355091] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:33.415 [2024-07-14 22:15:32.355282] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:33.415 [2024-07-14 22:15:32.355460] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:33.415 [2024-07-14 22:15:32.355623] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:33.415 passed 00:07:33.415 Test: allow_any_allowed ...passed 00:07:33.415 Test: allow_ipv6_allowed ...passed 00:07:33.415 Test: allow_ipv6_denied ...passed 00:07:33.415 Test: allow_ipv6_invalid 
...passed 00:07:33.415 Test: allow_ipv4_allowed ...passed 00:07:33.415 Test: allow_ipv4_denied ...passed 00:07:33.415 Test: allow_ipv4_invalid ...passed 00:07:33.415 Test: node_access_allowed ...passed 00:07:33.415 Test: node_access_denied_by_empty_netmask ...passed 00:07:33.415 Test: node_access_multi_initiator_groups_cases ...passed 00:07:33.415 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:33.415 Test: chap_param_test_cases ...[2024-07-14 22:15:32.357934] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:33.415 [2024-07-14 22:15:32.358073] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:33.415 [2024-07-14 22:15:32.358222] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:33.415 [2024-07-14 22:15:32.358341] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:33.415 passed[2024-07-14 22:15:32.358471] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:33.415 00:07:33.415 00:07:33.415 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.415 suites 1 1 n/a 0 0 00:07:33.415 tests 13 13 13 0 0 00:07:33.415 asserts 50 50 50 0 n/a 00:07:33.415 00:07:33.415 Elapsed time = 0.002 seconds 00:07:33.415 22:15:32 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:33.415 00:07:33.415 00:07:33.415 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.415 http://cunit.sourceforge.net/ 00:07:33.415 00:07:33.415 00:07:33.415 Suite: iscsi_suite 00:07:33.416 Test: op_login_check_target_test ...[2024-07-14 22:15:32.392368] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:33.416 passed 00:07:33.416 Test: op_login_session_normal_test ...[2024-07-14 22:15:32.393002] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:33.416 [2024-07-14 22:15:32.393158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:33.416 [2024-07-14 22:15:32.393236] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:33.416 [2024-07-14 22:15:32.393352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:33.416 [2024-07-14 22:15:32.393519] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:33.416 [2024-07-14 22:15:32.393702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:33.416 [2024-07-14 22:15:32.393861] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:33.416 passed 00:07:33.416 Test: maxburstlength_test ...[2024-07-14 22:15:32.394226] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 
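The iscsi_pdu_hdr_op_data error just above is the first half of maxburstlength_test: a Data-OUT PDU carrying more payload than the preceding R2T advertised must be rejected. The shape of that length check, with hypothetical struct and field names rather than lib/iscsi's:

    /* Sketch of the Data-OUT length validation behind the "larger than the
     * value sent by R2T PDU" error; types here are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    struct r2t_state {
        uint32_t desired_xfer_len; /* advertised by the R2T PDU */
    };

    static int validate_dataout(const struct r2t_state *r2t, uint32_t data_len)
    {
        if (data_len > r2t->desired_xfer_len) {
            return -1; /* reject: payload exceeds what the R2T allowed */
        }
        return 0;
    }

    int main(void)
    {
        struct r2t_state r2t = { .desired_xfer_len = 65535 };

        printf("%d\n", validate_dataout(&r2t, 65536)); /* -1: one byte over */
        return 0;
    }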
00:07:33.416 [2024-07-14 22:15:32.394411] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:33.416 passed 00:07:33.416 Test: underflow_for_read_transfer_test ...passed 00:07:33.416 Test: underflow_for_zero_read_transfer_test ...passed 00:07:33.416 Test: underflow_for_request_sense_test ...passed 00:07:33.416 Test: underflow_for_check_condition_test ...passed 00:07:33.416 Test: add_transfer_task_test ...passed 00:07:33.416 Test: get_transfer_task_test ...passed 00:07:33.416 Test: del_transfer_task_test ...passed 00:07:33.416 Test: clear_all_transfer_tasks_test ...passed 00:07:33.416 Test: build_iovs_test ...passed 00:07:33.416 Test: build_iovs_with_md_test ...passed 00:07:33.416 Test: pdu_hdr_op_login_test ...[2024-07-14 22:15:32.397666] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:33.416 [2024-07-14 22:15:32.397887] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:33.416 [2024-07-14 22:15:32.398083] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_text_test ...[2024-07-14 22:15:32.398498] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:33.416 [2024-07-14 22:15:32.398704] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:33.416 [2024-07-14 22:15:32.398848] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_logout_test ...[2024-07-14 22:15:32.399245] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
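The logout error above encodes an iSCSI protocol rule rather than a bug: on a discovery session the target may accept a logout request only with reason code 0, "close the session", and the test sends reason 1 to provoke the rejection. A compact version of that guard (names are hypothetical):

    /* Guard corresponding to the "Target can accept logout only with reason
     * "close the session" on discovery session" error above. */
    #include <stdbool.h>
    #include <stdio.h>

    #define ISCSI_LOGOUT_REASON_CLOSE_SESSION 0

    static bool logout_reason_acceptable(bool discovery_session, int reason)
    {
        if (discovery_session && reason != ISCSI_LOGOUT_REASON_CLOSE_SESSION) {
            return false; /* "1 is not acceptable reason" in the log */
        }
        return true;
    }

    int main(void)
    {
        printf("%d\n", logout_reason_acceptable(true, 1)); /* 0: rejected */
        return 0;
    }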
00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_scsi_test ...[2024-07-14 22:15:32.399682] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:33.416 [2024-07-14 22:15:32.399859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:33.416 [2024-07-14 22:15:32.399944] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:33.416 [2024-07-14 22:15:32.400087] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:33.416 [2024-07-14 22:15:32.400299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:33.416 [2024-07-14 22:15:32.400592] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-14 22:15:32.400996] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:33.416 [2024-07-14 22:15:32.401165] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_nopout_test ...[2024-07-14 22:15:32.401691] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:33.416 [2024-07-14 22:15:32.401883] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:33.416 [2024-07-14 22:15:32.402040] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:33.416 [2024-07-14 22:15:32.402189] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:33.416 passed 00:07:33.416 Test: pdu_hdr_op_data_test ...[2024-07-14 22:15:32.402375] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:33.416 [2024-07-14 22:15:32.402640] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:33.416 [2024-07-14 22:15:32.402830] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:33.416 [2024-07-14 22:15:32.402995] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:33.416 [2024-07-14 22:15:32.403183] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:33.416 [2024-07-14 22:15:32.403375] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:33.416 [2024-07-14 22:15:32.403523] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:33.416 passed 00:07:33.416 Test: empty_text_with_cbit_test ...passed 00:07:33.416 Test: pdu_payload_read_test ...[2024-07-14 
22:15:32.406120] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:33.416 passed 00:07:33.416 Test: data_out_pdu_sequence_test ...passed 00:07:33.416 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:33.416 00:07:33.416 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.416 suites 1 1 n/a 0 0 00:07:33.416 tests 24 24 24 0 0 00:07:33.416 asserts 150253 150253 150253 0 n/a 00:07:33.416 00:07:33.416 Elapsed time = 0.018 seconds 00:07:33.416 22:15:32 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:33.416 00:07:33.416 00:07:33.416 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.416 http://cunit.sourceforge.net/ 00:07:33.416 00:07:33.416 00:07:33.416 Suite: init_grp_suite 00:07:33.416 Test: create_initiator_group_success_case ...passed 00:07:33.416 Test: find_initiator_group_success_case ...passed 00:07:33.416 Test: register_initiator_group_twice_case ...passed 00:07:33.416 Test: add_initiator_name_success_case ...passed 00:07:33.416 Test: add_initiator_name_fail_case ...[2024-07-14 22:15:32.445381] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:33.416 passed 00:07:33.416 Test: delete_all_initiator_names_success_case ...passed 00:07:33.416 Test: add_netmask_success_case ...passed 00:07:33.416 Test: add_netmask_fail_case ...[2024-07-14 22:15:32.446358] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:33.416 passed 00:07:33.416 Test: delete_all_netmasks_success_case ...passed 00:07:33.416 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:33.416 Test: netmask_overwrite_all_to_any_case ...passed 00:07:33.416 Test: add_delete_initiator_names_case ...passed 00:07:33.416 Test: add_duplicated_initiator_names_case ...passed 00:07:33.416 Test: delete_nonexisting_initiator_names_case ...passed 00:07:33.416 Test: add_delete_netmasks_case ...passed 00:07:33.416 Test: add_duplicated_netmasks_case ...passed 00:07:33.416 Test: delete_nonexisting_netmasks_case ...passed 00:07:33.416 00:07:33.416 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.416 suites 1 1 n/a 0 0 00:07:33.416 tests 17 17 17 0 0 00:07:33.416 asserts 108 108 108 0 n/a 00:07:33.416 00:07:33.416 Elapsed time = 0.001 seconds 00:07:33.416 22:15:32 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:33.678 00:07:33.678 00:07:33.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.678 http://cunit.sourceforge.net/ 00:07:33.678 00:07:33.678 00:07:33.678 Suite: portal_grp_suite 00:07:33.678 Test: portal_create_ipv4_normal_case ...passed 00:07:33.678 Test: portal_create_ipv6_normal_case ...passed 00:07:33.678 Test: portal_create_ipv4_wildcard_case ...passed 00:07:33.678 Test: portal_create_ipv6_wildcard_case ...passed 00:07:33.678 Test: portal_create_twice_case ...[2024-07-14 22:15:32.484373] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:33.678 passed 00:07:33.678 Test: portal_grp_register_unregister_case ...passed 00:07:33.678 Test: portal_grp_register_twice_case ...passed 00:07:33.678 Test: portal_grp_add_delete_case ...passed 00:07:33.678 Test: portal_grp_add_delete_twice_case ...passed 00:07:33.678 00:07:33.678 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:33.678 suites 1 1 n/a 0 0 00:07:33.678 tests 9 9 9 0 0 00:07:33.678 asserts 44 44 44 0 n/a 00:07:33.678 00:07:33.678 Elapsed time = 0.003 seconds 00:07:33.678 00:07:33.678 real 0m0.244s 00:07:33.678 user 0m0.148s 00:07:33.678 sys 0m0.080s 00:07:33.678 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.678 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.678 ************************************ 00:07:33.678 END TEST unittest_iscsi 00:07:33.678 ************************************ 00:07:33.678 22:15:32 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:33.678 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.678 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.678 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.678 ************************************ 00:07:33.678 START TEST unittest_json 00:07:33.678 ************************************ 00:07:33.678 22:15:32 -- common/autotest_common.sh@1104 -- # unittest_json 00:07:33.678 22:15:32 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:33.678 00:07:33.678 00:07:33.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.678 http://cunit.sourceforge.net/ 00:07:33.678 00:07:33.678 00:07:33.678 Suite: json 00:07:33.678 Test: test_parse_literal ...passed 00:07:33.678 Test: test_parse_string_simple ...passed 00:07:33.678 Test: test_parse_string_control_chars ...passed 00:07:33.678 Test: test_parse_string_utf8 ...passed 00:07:33.678 Test: test_parse_string_escapes_twochar ...passed 00:07:33.678 Test: test_parse_string_escapes_unicode ...passed 00:07:33.678 Test: test_parse_number ...passed 00:07:33.678 Test: test_parse_array ...passed 00:07:33.678 Test: test_parse_object ...passed 00:07:33.678 Test: test_parse_nesting ...passed 00:07:33.678 Test: test_parse_comment ...passed 00:07:33.678 00:07:33.678 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.678 suites 1 1 n/a 0 0 00:07:33.678 tests 11 11 11 0 0 00:07:33.678 asserts 1516 1516 1516 0 n/a 00:07:33.678 00:07:33.678 Elapsed time = 0.002 seconds 00:07:33.678 22:15:32 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:33.678 00:07:33.678 00:07:33.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.678 http://cunit.sourceforge.net/ 00:07:33.678 00:07:33.678 00:07:33.678 Suite: json 00:07:33.678 Test: test_strequal ...passed 00:07:33.678 Test: test_num_to_uint16 ...passed 00:07:33.678 Test: test_num_to_int32 ...passed 00:07:33.678 Test: test_num_to_uint64 ...passed 00:07:33.678 Test: test_decode_object ...passed 00:07:33.678 Test: test_decode_array ...passed 00:07:33.678 Test: test_decode_bool ...passed 00:07:33.678 Test: test_decode_uint16 ...passed 00:07:33.678 Test: test_decode_int32 ...passed 00:07:33.678 Test: test_decode_uint32 ...passed 00:07:33.678 Test: test_decode_uint64 ...passed 00:07:33.678 Test: test_decode_string ...passed 00:07:33.678 Test: test_decode_uuid ...passed 00:07:33.678 Test: test_find ...passed 00:07:33.678 Test: test_find_array ...passed 00:07:33.678 Test: test_iterating ...passed 00:07:33.678 Test: test_free_object ...passed 00:07:33.678 00:07:33.678 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.678 suites 1 1 n/a 0 0 00:07:33.678 tests 17 17 17 0 0 00:07:33.678 asserts 236 236 236 0 n/a 00:07:33.678 00:07:33.678 Elapsed time = 0.001 seconds 00:07:33.678 
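Every binary in this run prints the same CUnit banner, per-test "passed" lines, and a closing Run Summary table. For reference, the registration pattern those test binaries share looks like the following; this is a generic CUnit skeleton (compile with -lcunit), not a copy of any SPDK test file:

    /* Generic CUnit skeleton producing the banner and "Run Summary: Type
     * Total Ran Passed Failed Inactive" table seen throughout this log. */
    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2); /* counted in the "asserts" row */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* prints per-test results and the Run Summary */
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }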
22:15:32 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:33.678 00:07:33.678 00:07:33.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.678 http://cunit.sourceforge.net/ 00:07:33.678 00:07:33.678 00:07:33.678 Suite: json 00:07:33.678 Test: test_write_literal ...passed 00:07:33.678 Test: test_write_string_simple ...passed 00:07:33.678 Test: test_write_string_escapes ...passed 00:07:33.678 Test: test_write_string_utf16le ...passed 00:07:33.678 Test: test_write_number_int32 ...passed 00:07:33.678 Test: test_write_number_uint32 ...passed 00:07:33.678 Test: test_write_number_uint128 ...passed 00:07:33.678 Test: test_write_string_number_uint128 ...passed 00:07:33.678 Test: test_write_number_int64 ...passed 00:07:33.678 Test: test_write_number_uint64 ...passed 00:07:33.678 Test: test_write_number_double ...passed 00:07:33.678 Test: test_write_uuid ...passed 00:07:33.678 Test: test_write_array ...passed 00:07:33.678 Test: test_write_object ...passed 00:07:33.678 Test: test_write_nesting ...passed 00:07:33.678 Test: test_write_val ...passed 00:07:33.678 00:07:33.678 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.678 suites 1 1 n/a 0 0 00:07:33.678 tests 16 16 16 0 0 00:07:33.678 asserts 918 918 918 0 n/a 00:07:33.678 00:07:33.678 Elapsed time = 0.004 seconds 00:07:33.678 22:15:32 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:33.678 00:07:33.678 00:07:33.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.678 http://cunit.sourceforge.net/ 00:07:33.678 00:07:33.678 00:07:33.678 Suite: jsonrpc 00:07:33.678 Test: test_parse_request ...passed 00:07:33.678 Test: test_parse_request_streaming ...passed 00:07:33.678 00:07:33.678 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.678 suites 1 1 n/a 0 0 00:07:33.678 tests 2 2 2 0 0 00:07:33.678 asserts 289 289 289 0 n/a 00:07:33.678 00:07:33.678 Elapsed time = 0.003 seconds 00:07:33.678 ************************************ 00:07:33.678 END TEST unittest_json 00:07:33.678 ************************************ 00:07:33.678 00:07:33.678 real 0m0.136s 00:07:33.678 user 0m0.060s 00:07:33.678 sys 0m0.070s 00:07:33.678 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.678 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.678 22:15:32 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:33.678 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.678 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.678 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.981 ************************************ 00:07:33.981 START TEST unittest_rpc 00:07:33.981 ************************************ 00:07:33.981 22:15:32 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:07:33.981 22:15:32 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:33.981 00:07:33.981 00:07:33.981 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.981 http://cunit.sourceforge.net/ 00:07:33.981 00:07:33.981 00:07:33.981 Suite: rpc 00:07:33.981 Test: test_jsonrpc_handler ...passed 00:07:33.981 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:33.981 Test: test_rpc_get_methods ...[2024-07-14 22:15:32.757129] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:33.981 passed 00:07:33.981 Test: 
test_rpc_spdk_get_version ...passed 00:07:33.981 Test: test_spdk_rpc_listen_close ...passed 00:07:33.981 00:07:33.981 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.981 suites 1 1 n/a 0 0 00:07:33.981 tests 5 5 5 0 0 00:07:33.981 asserts 20 20 20 0 n/a 00:07:33.981 00:07:33.981 Elapsed time = 0.000 seconds 00:07:33.981 ************************************ 00:07:33.981 END TEST unittest_rpc 00:07:33.981 ************************************ 00:07:33.981 00:07:33.981 real 0m0.030s 00:07:33.981 user 0m0.014s 00:07:33.981 sys 0m0.016s 00:07:33.981 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.981 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.981 22:15:32 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:33.981 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.981 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.981 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.981 ************************************ 00:07:33.981 START TEST unittest_notify 00:07:33.981 ************************************ 00:07:33.981 22:15:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:33.981 00:07:33.981 00:07:33.981 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.981 http://cunit.sourceforge.net/ 00:07:33.981 00:07:33.981 00:07:33.981 Suite: app_suite 00:07:33.981 Test: notify ...passed 00:07:33.981 00:07:33.981 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.981 suites 1 1 n/a 0 0 00:07:33.981 tests 1 1 1 0 0 00:07:33.982 asserts 13 13 13 0 n/a 00:07:33.982 00:07:33.982 Elapsed time = 0.000 seconds 00:07:33.982 ************************************ 00:07:33.982 END TEST unittest_notify 00:07:33.982 ************************************ 00:07:33.982 00:07:33.982 real 0m0.032s 00:07:33.982 user 0m0.020s 00:07:33.982 sys 0m0.012s 00:07:33.982 22:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.982 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 22:15:32 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:33.982 22:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.982 22:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.982 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 ************************************ 00:07:33.982 START TEST unittest_nvme 00:07:33.982 ************************************ 00:07:33.982 22:15:32 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:07:33.982 22:15:32 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:33.982 00:07:33.982 00:07:33.982 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.982 http://cunit.sourceforge.net/ 00:07:33.982 00:07:33.982 00:07:33.982 Suite: nvme 00:07:33.982 Test: test_opc_data_transfer ...passed 00:07:33.982 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:33.982 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:33.982 Test: test_trid_parse_and_compare ...[2024-07-14 22:15:32.913454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:33.982 [2024-07-14 22:15:32.913851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:33.982 [2024-07-14 
22:15:32.914034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:33.982 [2024-07-14 22:15:32.914181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:33.982 [2024-07-14 22:15:32.914326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:33.982 [2024-07-14 22:15:32.914435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:33.982 passed 00:07:33.982 Test: test_trid_trtype_str ...passed 00:07:33.982 Test: test_trid_adrfam_str ...passed 00:07:33.982 Test: test_nvme_ctrlr_probe ...[2024-07-14 22:15:32.915247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:33.982 passed 00:07:33.982 Test: test_spdk_nvme_probe ...[2024-07-14 22:15:32.915652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:33.982 [2024-07-14 22:15:32.915796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:33.982 [2024-07-14 22:15:32.915920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:33.982 [2024-07-14 22:15:32.916075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:33.982 passed 00:07:33.982 Test: test_spdk_nvme_connect ...[2024-07-14 22:15:32.916493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:33.982 [2024-07-14 22:15:32.916964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:33.982 [2024-07-14 22:15:32.917142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:33.982 passed 00:07:33.982 Test: test_nvme_ctrlr_probe_internal ...[2024-07-14 22:15:32.917500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:33.982 [2024-07-14 22:15:32.917661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:33.982 passed 00:07:33.982 Test: test_nvme_init_controllers ...[2024-07-14 22:15:32.917989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:33.982 passed 00:07:33.982 Test: test_nvme_driver_init ...[2024-07-14 22:15:32.918386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:33.982 [2024-07-14 22:15:32.918536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:33.982 [2024-07-14 22:15:33.032624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:33.982 [2024-07-14 22:15:33.032958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:33.982 passed 00:07:33.982 Test: test_spdk_nvme_detach ...passed 00:07:33.982 Test: test_nvme_completion_poll_cb ...passed 00:07:33.982 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:33.982 Test: 
test_nvme_allocate_request_null ...passed 00:07:33.982 Test: test_nvme_allocate_request ...passed 00:07:33.982 Test: test_nvme_free_request ...passed 00:07:33.982 Test: test_nvme_allocate_request_user_copy ...passed 00:07:33.982 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:33.982 Test: test_nvme_request_check_timeout ...passed 00:07:33.982 Test: test_nvme_wait_for_completion ...passed 00:07:33.982 Test: test_spdk_nvme_parse_func ...passed 00:07:33.982 Test: test_spdk_nvme_detach_async ...passed 00:07:33.982 Test: test_nvme_parse_addr ...[2024-07-14 22:15:33.036943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:33.982 passed 00:07:33.982 00:07:33.982 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.982 suites 1 1 n/a 0 0 00:07:33.982 tests 25 25 25 0 0 00:07:33.982 asserts 326 326 326 0 n/a 00:07:33.982 00:07:33.982 Elapsed time = 0.007 seconds 00:07:34.240 22:15:33 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:34.240 00:07:34.240 00:07:34.240 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.240 http://cunit.sourceforge.net/ 00:07:34.240 00:07:34.240 00:07:34.240 Suite: nvme_ctrlr 00:07:34.240 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-14 22:15:33.074498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.240 passed 00:07:34.240 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-14 22:15:33.076663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 passed 00:07:34.241 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-14 22:15:33.078342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 passed 00:07:34.241 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-14 22:15:33.080065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 passed 00:07:34.241 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-14 22:15:33.081755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 [2024-07-14 22:15:33.083102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 22:15:33.084497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 22:15:33.085892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:34.241 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-14 22:15:33.088790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 [2024-07-14 22:15:33.091284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 22:15:33.092622] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:34.241 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-14 22:15:33.095530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 [2024-07-14 22:15:33.096923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 22:15:33.099524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:34.241 Test: test_nvme_ctrlr_init_delay ...[2024-07-14 22:15:33.102518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 passed 00:07:34.241 Test: test_alloc_io_qpair_rr_1 ...[2024-07-14 22:15:33.104366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 [2024-07-14 22:15:33.104737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:34.241 [2024-07-14 22:15:33.105164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:34.241 [2024-07-14 22:15:33.105438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:34.241 [2024-07-14 22:15:33.105687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:34.241 passed 00:07:34.241 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:34.241 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:34.241 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-14 22:15:33.106951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 passed 00:07:34.241 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-14 22:15:33.107682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.241 [2024-07-14 22:15:33.108074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:34.241 passed 00:07:34.241 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-14 22:15:33.108946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:34.241 [2024-07-14 22:15:33.109293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:34.241 [2024-07-14 22:15:33.109601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:34.241 [2024-07-14 22:15:33.109893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:34.241 passed 00:07:34.241 Test: test_nvme_ctrlr_fail ...[2024-07-14 22:15:33.110475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:34.241 passed 00:07:34.241 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:34.241 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:34.241 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:34.241 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-14 22:15:33.112111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.498 passed 00:07:34.498 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:34.498 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:34.498 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:34.498 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-14 22:15:33.432642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.498 passed 00:07:34.498 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-14 22:15:33.440235] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.498 passed 00:07:34.499 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-14 22:15:33.441993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 [2024-07-14 22:15:33.442206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:34.499 passed 00:07:34.499 Test: test_alloc_io_qpair_fail ...[2024-07-14 22:15:33.443840] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 [2024-07-14 22:15:33.444090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:34.499 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:34.499 Test: test_nvme_ctrlr_set_state ...[2024-07-14 22:15:33.445057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-14 22:15:33.445448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-14 22:15:33.466901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-14 22:15:33.509811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_reset ...[2024-07-14 22:15:33.511698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_aer_callback ...[2024-07-14 22:15:33.512394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-14 22:15:33.514336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:34.499 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:34.499 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-14 22:15:33.516877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:34.499 Test: test_nvme_ctrlr_ana_resize ...[2024-07-14 22:15:33.518863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:34.499 Test: test_nvme_transport_ctrlr_ready ...[2024-07-14 22:15:33.521071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:34.499 [2024-07-14 22:15:33.521245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:34.499 passed 00:07:34.499 Test: test_nvme_ctrlr_disable ...[2024-07-14 22:15:33.521603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:34.499 passed 00:07:34.499 00:07:34.499 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.499 suites 1 1 n/a 0 0 00:07:34.499 tests 43 43 43 0 0 00:07:34.499 asserts 10418 10418 10418 0 n/a 00:07:34.499 00:07:34.499 Elapsed time = 0.392 seconds 00:07:34.499 22:15:33 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:34.757 00:07:34.757 
00:07:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.757 http://cunit.sourceforge.net/ 00:07:34.757 00:07:34.757 00:07:34.757 Suite: nvme_ctrlr_cmd 00:07:34.757 Test: test_get_log_pages ...passed 00:07:34.757 Test: test_set_feature_cmd ...passed 00:07:34.757 Test: test_set_feature_ns_cmd ...passed 00:07:34.757 Test: test_get_feature_cmd ...passed 00:07:34.757 Test: test_get_feature_ns_cmd ...passed 00:07:34.757 Test: test_abort_cmd ...passed 00:07:34.757 Test: test_set_host_id_cmds ...[2024-07-14 22:15:33.565270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:34.757 passed 00:07:34.757 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:34.757 Test: test_io_raw_cmd ...passed 00:07:34.757 Test: test_io_raw_cmd_with_md ...passed 00:07:34.757 Test: test_namespace_attach ...passed 00:07:34.757 Test: test_namespace_detach ...passed 00:07:34.757 Test: test_namespace_create ...passed 00:07:34.757 Test: test_namespace_delete ...passed 00:07:34.757 Test: test_doorbell_buffer_config ...passed 00:07:34.757 Test: test_format_nvme ...passed 00:07:34.757 Test: test_fw_commit ...passed 00:07:34.757 Test: test_fw_image_download ...passed 00:07:34.757 Test: test_sanitize ...passed 00:07:34.757 Test: test_directive ...passed 00:07:34.757 Test: test_nvme_request_add_abort ...passed 00:07:34.757 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:34.757 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:34.757 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:34.757 00:07:34.757 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.757 suites 1 1 n/a 0 0 00:07:34.757 tests 24 24 24 0 0 00:07:34.757 asserts 198 198 198 0 n/a 00:07:34.757 00:07:34.757 Elapsed time = 0.001 seconds 00:07:34.757 22:15:33 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:34.757 00:07:34.757 00:07:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.757 http://cunit.sourceforge.net/ 00:07:34.757 00:07:34.757 00:07:34.757 Suite: nvme_ctrlr_cmd 00:07:34.757 Test: test_geometry_cmd ...passed 00:07:34.757 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:34.757 00:07:34.757 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.757 suites 1 1 n/a 0 0 00:07:34.757 tests 2 2 2 0 0 00:07:34.757 asserts 7 7 7 0 n/a 00:07:34.757 00:07:34.757 Elapsed time = 0.000 seconds 00:07:34.757 22:15:33 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:34.757 00:07:34.757 00:07:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.757 http://cunit.sourceforge.net/ 00:07:34.757 00:07:34.757 00:07:34.757 Suite: nvme 00:07:34.757 Test: test_nvme_ns_construct ...passed 00:07:34.757 Test: test_nvme_ns_uuid ...passed 00:07:34.757 Test: test_nvme_ns_csi ...passed 00:07:34.757 Test: test_nvme_ns_data ...passed 00:07:34.757 Test: test_nvme_ns_set_identify_data ...passed 00:07:34.757 Test: test_spdk_nvme_ns_get_values ...passed 00:07:34.757 Test: test_spdk_nvme_ns_is_active ...passed 00:07:34.757 Test: spdk_nvme_ns_supports ...passed 00:07:34.757 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:34.757 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:34.757 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:34.757 Test: test_nvme_ns_find_id_desc ...passed 00:07:34.757 00:07:34.757 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:34.757 suites 1 1 n/a 0 0 00:07:34.757 tests 12 12 12 0 0 00:07:34.757 asserts 83 83 83 0 n/a 00:07:34.757 00:07:34.757 Elapsed time = 0.001 seconds 00:07:34.757 22:15:33 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:34.757 00:07:34.757 00:07:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.757 http://cunit.sourceforge.net/ 00:07:34.757 00:07:34.757 00:07:34.757 Suite: nvme_ns_cmd 00:07:34.757 Test: split_test ...passed 00:07:34.757 Test: split_test2 ...passed 00:07:34.757 Test: split_test3 ...passed 00:07:34.757 Test: split_test4 ...passed 00:07:34.757 Test: test_nvme_ns_cmd_flush ...passed 00:07:34.757 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:34.757 Test: test_nvme_ns_cmd_copy ...passed 00:07:34.757 Test: test_io_flags ...[2024-07-14 22:15:33.667124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:34.757 passed 00:07:34.757 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:34.757 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:34.757 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:34.757 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:34.757 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:34.757 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:34.757 Test: test_cmd_child_request ...passed 00:07:34.757 Test: test_nvme_ns_cmd_readv ...passed 00:07:34.757 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_writev ...[2024-07-14 22:15:33.670545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:34.757 passed 00:07:34.757 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_comparev ...passed 00:07:34.757 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:34.757 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:34.757 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:34.757 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:34.757 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-14 22:15:33.674096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:34.757 passed 00:07:34.757 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-14 22:15:33.674483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:34.757 passed 00:07:34.757 Test: test_nvme_ns_cmd_verify ...passed 00:07:34.757 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:34.757 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:34.757 00:07:34.757 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.757 suites 1 1 n/a 0 0 00:07:34.757 tests 32 32 32 0 0 00:07:34.757 asserts 550 550 550 0 n/a 00:07:34.757 00:07:34.757 Elapsed time = 0.005 seconds 00:07:34.757 22:15:33 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:34.757 00:07:34.757 00:07:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.757 http://cunit.sourceforge.net/ 00:07:34.757 00:07:34.757 00:07:34.757 Suite: 
nvme_ns_cmd 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:34.757 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:34.758 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:34.758 00:07:34.758 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.758 suites 1 1 n/a 0 0 00:07:34.758 tests 12 12 12 0 0 00:07:34.758 asserts 123 123 123 0 n/a 00:07:34.758 00:07:34.758 Elapsed time = 0.001 seconds 00:07:34.758 22:15:33 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:34.758 00:07:34.758 00:07:34.758 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.758 http://cunit.sourceforge.net/ 00:07:34.758 00:07:34.758 00:07:34.758 Suite: nvme_qpair 00:07:34.758 Test: test3 ...passed 00:07:34.758 Test: test_ctrlr_failed ...passed 00:07:34.758 Test: struct_packing ...passed 00:07:34.758 Test: test_nvme_qpair_process_completions ...[2024-07-14 22:15:33.741009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:34.758 [2024-07-14 22:15:33.741535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:34.758 [2024-07-14 22:15:33.741709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:34.758 [2024-07-14 22:15:33.741925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:34.758 passed 00:07:34.758 Test: test_nvme_completion_is_retry ...passed 00:07:34.758 Test: test_get_status_string ...passed 00:07:34.758 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:34.758 Test: test_nvme_qpair_submit_request ...passed 00:07:34.758 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:34.758 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:34.758 Test: test_nvme_qpair_init_deinit ...[2024-07-14 22:15:33.743506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:34.758 passed 00:07:34.758 Test: test_nvme_get_sgl_print_info ...passed 00:07:34.758 00:07:34.758 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.758 suites 1 1 n/a 0 0 00:07:34.758 tests 12 12 12 0 0 00:07:34.758 asserts 154 154 154 0 n/a 00:07:34.758 00:07:34.758 Elapsed time = 0.002 seconds 00:07:34.758 22:15:33 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:34.758 00:07:34.758 00:07:34.758 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.758 http://cunit.sourceforge.net/ 00:07:34.758 
00:07:34.758 00:07:34.758 Suite: nvme_pcie 00:07:34.758 Test: test_prp_list_append ...[2024-07-14 22:15:33.778110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:34.758 [2024-07-14 22:15:33.778458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:34.758 [2024-07-14 22:15:33.778506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:34.758 [2024-07-14 22:15:33.778797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:34.758 passed 00:07:34.758 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-14 22:15:33.778910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:34.758 passed 00:07:34.758 Test: test_shadow_doorbell_update ...passed 00:07:34.758 Test: test_build_contig_hw_sgl_request ...passed 00:07:34.758 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:34.758 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:34.758 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:34.758 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-14 22:15:33.779091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:34.758 passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-14 22:15:33.779193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:34.758 [2024-07-14 22:15:33.779276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:34.758 passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:07:34.758 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:34.758 00:07:34.758 [2024-07-14 22:15:33.779349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:34.758 [2024-07-14 22:15:33.779394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:34.758 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.758 suites 1 1 n/a 0 0 00:07:34.758 tests 14 14 14 0 0 00:07:34.758 asserts 235 235 235 0 n/a 00:07:34.758 00:07:34.758 Elapsed time = 0.001 seconds 00:07:34.758 22:15:33 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:34.758 00:07:34.758 00:07:34.758 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.758 http://cunit.sourceforge.net/ 00:07:34.758 00:07:34.758 00:07:34.758 Suite: nvme_ns_cmd 00:07:34.758 Test: nvme_poll_group_create_test ...passed 00:07:34.758 Test: nvme_poll_group_add_remove_test ...passed 00:07:34.758 Test: nvme_poll_group_process_completions ...passed 00:07:34.758 Test: nvme_poll_group_destroy_test ...passed 00:07:34.758 Test: nvme_poll_group_get_free_stats ...passed 00:07:34.758 00:07:34.758 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.758 suites 1 1 n/a 0 0 00:07:34.758 tests 5 5 5 0 0 00:07:34.758 asserts 75 75 75 0 n/a 00:07:34.758 00:07:34.758 Elapsed time = 0.000 seconds 00:07:35.018 22:15:33 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:35.018 00:07:35.018 00:07:35.018 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.018 http://cunit.sourceforge.net/ 00:07:35.018 00:07:35.018 00:07:35.018 Suite: nvme_quirks 00:07:35.018 Test: test_nvme_quirks_striping ...passed 00:07:35.018 00:07:35.018 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.018 suites 1 1 n/a 0 0 00:07:35.018 tests 1 1 1 0 0 00:07:35.018 asserts 5 5 5 0 n/a 00:07:35.018 00:07:35.018 Elapsed time = 0.000 seconds 00:07:35.018 22:15:33 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:35.018 00:07:35.018 00:07:35.018 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.018 http://cunit.sourceforge.net/ 00:07:35.018 00:07:35.018 00:07:35.018 Suite: nvme_tcp 00:07:35.018 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:35.018 Test: test_nvme_tcp_build_iovs ...passed 00:07:35.018 Test: test_nvme_tcp_build_sgl_request ...[2024-07-14 22:15:33.871207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd0f270950, and the iovcnt=16, remaining_size=28672 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:35.018 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:35.018 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:35.018 Test: test_nvme_tcp_req_get ...passed 00:07:35.018 Test: test_nvme_tcp_req_init ...passed 00:07:35.018 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:35.018 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:35.018 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:07:35.018 Test: 
test_nvme_tcp_alloc_reqs ...[2024-07-14 22:15:33.871845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f272670 is same with the state(6) to be set 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-14 22:15:33.872188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271800 is same with the state(5) to be set 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-14 22:15:33.872255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd0f272330 00:07:35.018 [2024-07-14 22:15:33.872299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:35.018 [2024-07-14 22:15:33.872379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:35.018 [2024-07-14 22:15:33.872541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:35.018 [2024-07-14 22:15:33.872602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-14 22:15:33.872762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271cc0 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.872942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:35.018 [2024-07-14 22:15:33.872989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:35.018 [2024-07-14 22:15:33.873203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:35.018 passed 00:07:35.018 Test: 
test_nvme_tcp_qpair_icreq_send ...passed 00:07:35.018 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-14 22:15:33.873306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd0f271e70): PDU Sequence Error 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_icresp_handle ...[2024-07-14 22:15:33.873405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:35.018 [2024-07-14 22:15:33.873434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:35.018 [2024-07-14 22:15:33.873465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271810 is same with the state(5) to be set 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-14 22:15:33.873505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:35.018 [2024-07-14 22:15:33.873536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271810 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.873577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f271810 is same with the state(0) to be set 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:35.018 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-07-14 22:15:33.873646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd0f272330): PDU Sequence Error 00:07:35.018 [2024-07-14 22:15:33.873737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd0f270af0 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-14 22:15:33.873874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd0f270170, errno=0, rc=0 00:07:35.018 [2024-07-14 22:15:33.873927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f270170 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.873984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd0f270170 is same with the state(5) to be set 00:07:35.018 [2024-07-14 22:15:33.874021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd0f270170 (0): Success 00:07:35.018 [2024-07-14 22:15:33.874053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd0f270170 (0): Success 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-14 22:15:33.985963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:35.018 [2024-07-14 22:15:33.986095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:35.018 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-14 22:15:33.986349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:35.018 [2024-07-14 22:15:33.986401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-14 22:15:33.986622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:35.018 [2024-07-14 22:15:33.986668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:35.018 [2024-07-14 22:15:33.986772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:35.018 [2024-07-14 22:15:33.986847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:35.018 [2024-07-14 22:15:33.986951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:35.018 passed 00:07:35.018 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-14 22:15:33.987017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:35.018 [2024-07-14 22:15:33.987166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:35.018 [2024-07-14 22:15:33.987202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:35.018 passed 00:07:35.018 00:07:35.018 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.018 suites 1 1 n/a 0 0 00:07:35.018 tests 27 27 27 0 0 00:07:35.018 asserts 624 624 624 0 n/a 00:07:35.018 00:07:35.018 Elapsed time = 0.116 seconds 00:07:35.018 22:15:34 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:35.018 00:07:35.018 00:07:35.018 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.018 http://cunit.sourceforge.net/ 00:07:35.018 00:07:35.018 00:07:35.018 Suite: nvme_transport 00:07:35.018 Test: test_nvme_get_transport ...passed 00:07:35.018 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:35.018 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:35.019 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:35.019 Test: test_ctrlr_get_memory_domains ...passed 00:07:35.019 00:07:35.019 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.019 suites 1 1 n/a 0 0 00:07:35.019 tests 5 5 5 0 0 00:07:35.019 asserts 28 28 28 0 n/a 00:07:35.019 00:07:35.019 Elapsed time = 0.000 seconds 00:07:35.019 22:15:34 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:35.019 00:07:35.019 00:07:35.019 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.019 http://cunit.sourceforge.net/ 00:07:35.019 00:07:35.019 00:07:35.019 Suite: nvme_io_msg 00:07:35.019 Test: test_nvme_io_msg_send ...passed 00:07:35.019 Test: 
test_nvme_io_msg_process ...passed 00:07:35.019 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:35.019 00:07:35.019 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.019 suites 1 1 n/a 0 0 00:07:35.019 tests 3 3 3 0 0 00:07:35.019 asserts 56 56 56 0 n/a 00:07:35.019 00:07:35.019 Elapsed time = 0.000 seconds 00:07:35.019 22:15:34 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:35.278 00:07:35.278 00:07:35.278 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.278 http://cunit.sourceforge.net/ 00:07:35.278 00:07:35.278 00:07:35.278 Suite: nvme_pcie_common 00:07:35.278 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-14 22:15:34.091473] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:35.278 passed 00:07:35.278 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:35.278 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:35.278 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-14 22:15:34.092273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:35.278 [2024-07-14 22:15:34.092397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:35.278 [2024-07-14 22:15:34.092434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:35.278 passed 00:07:35.278 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:35.278 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:07:35.278 00:07:35.278 [2024-07-14 22:15:34.092886] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:35.278 [2024-07-14 22:15:34.092931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:35.278 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.278 suites 1 1 n/a 0 0 00:07:35.278 tests 6 6 6 0 0 00:07:35.278 asserts 148 148 148 0 n/a 00:07:35.278 00:07:35.278 Elapsed time = 0.002 seconds 00:07:35.278 22:15:34 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:35.278 00:07:35.278 00:07:35.278 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.278 http://cunit.sourceforge.net/ 00:07:35.278 00:07:35.278 00:07:35.278 Suite: nvme_fabric 00:07:35.278 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:35.278 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:35.278 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:35.278 Test: test_nvme_fabric_discover_probe ...passed 00:07:35.278 Test: test_nvme_fabric_qpair_connect ...[2024-07-14 22:15:34.123974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:35.278 passed 00:07:35.278 00:07:35.278 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.278 suites 1 1 n/a 0 0 00:07:35.278 tests 5 5 5 0 0 00:07:35.278 asserts 60 60 60 0 n/a 00:07:35.278 00:07:35.278 Elapsed time = 0.001 seconds 00:07:35.278 22:15:34 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:35.278 00:07:35.278 00:07:35.278 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.278 http://cunit.sourceforge.net/ 00:07:35.278 00:07:35.278 00:07:35.278 Suite: nvme_opal 00:07:35.278 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:35.278 Test: test_opal_add_short_atom_header ...[2024-07-14 22:15:34.159561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:35.278 passed 00:07:35.278 00:07:35.278 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.278 suites 1 1 n/a 0 0 00:07:35.278 tests 2 2 2 0 0 00:07:35.278 asserts 22 22 22 0 n/a 00:07:35.278 00:07:35.278 Elapsed time = 0.001 seconds 00:07:35.278 00:07:35.278 real 0m1.277s 00:07:35.278 user 0m0.681s 00:07:35.278 sys 0m0.413s 00:07:35.278 22:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.278 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.278 ************************************ 00:07:35.278 END TEST unittest_nvme 00:07:35.278 ************************************ 00:07:35.278 22:15:34 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:35.278 22:15:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:35.278 22:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.278 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.278 ************************************ 00:07:35.278 START TEST unittest_log 00:07:35.278 ************************************ 00:07:35.278 22:15:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:35.278 00:07:35.278 00:07:35.278 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.278 http://cunit.sourceforge.net/ 00:07:35.278 00:07:35.278 00:07:35.278 Suite: log 00:07:35.278 Test: log_test ...[2024-07-14 22:15:34.241496] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:35.278 passed 00:07:35.278 Test: deprecation ...[2024-07-14 22:15:34.241769] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:35.278 log dump test: 00:07:35.278 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:35.278 spdk dump test: 00:07:35.278 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:35.278 spdk dump test: 00:07:35.278 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:35.278 00000010 65 20 63 68 61 72 73 e chars 00:07:36.213 passed 00:07:36.213 00:07:36.213 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.213 suites 1 1 n/a 0 0 00:07:36.213 tests 2 2 2 0 0 00:07:36.213 asserts 73 73 73 0 n/a 00:07:36.213 00:07:36.213 Elapsed time = 0.001 seconds 00:07:36.213 00:07:36.213 real 0m1.034s 00:07:36.213 user 0m0.015s 00:07:36.213 sys 0m0.020s 00:07:36.213 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.213 ************************************ 00:07:36.213 END TEST unittest_log 00:07:36.213 ************************************ 00:07:36.213 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.472 22:15:35 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:36.472 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.472 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.472 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.472 
************************************ 00:07:36.472 START TEST unittest_lvol 00:07:36.472 ************************************ 00:07:36.472 22:15:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:36.472 00:07:36.472 00:07:36.472 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.472 http://cunit.sourceforge.net/ 00:07:36.472 00:07:36.472 00:07:36.472 Suite: lvol 00:07:36.472 Test: lvs_init_unload_success ...[2024-07-14 22:15:35.333219] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:36.472 passed 00:07:36.472 Test: lvs_init_destroy_success ...[2024-07-14 22:15:35.333826] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:36.472 passed 00:07:36.472 Test: lvs_init_opts_success ...passed 00:07:36.472 Test: lvs_unload_lvs_is_null_fail ...passed 00:07:36.472 Test: lvs_names ...[2024-07-14 22:15:35.334060] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:36.472 [2024-07-14 22:15:35.334110] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:36.472 [2024-07-14 22:15:35.334138] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:36.472 [2024-07-14 22:15:35.334318] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:36.472 passed 00:07:36.472 Test: lvol_create_destroy_success ...passed 00:07:36.472 Test: lvol_create_fail ...[2024-07-14 22:15:35.334888] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:36.472 [2024-07-14 22:15:35.335006] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:36.472 passed 00:07:36.472 Test: lvol_destroy_fail ...[2024-07-14 22:15:35.335306] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:36.472 passed 00:07:36.472 Test: lvol_close ...[2024-07-14 22:15:35.335519] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:36.472 passed 00:07:36.472 Test: lvol_resize ...[2024-07-14 22:15:35.335575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:36.472 passed 00:07:36.472 Test: lvol_set_read_only ...passed 00:07:36.472 Test: test_lvs_load ...[2024-07-14 22:15:35.336415] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:36.472 passed 00:07:36.472 Test: lvols_load ...[2024-07-14 22:15:35.336463] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:36.472 [2024-07-14 22:15:35.336722] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:36.472 [2024-07-14 22:15:35.336859] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:36.472 passed 00:07:36.472 Test: lvol_open ...passed 00:07:36.472 Test: lvol_snapshot ...passed 00:07:36.472 Test: lvol_snapshot_fail ...[2024-07-14 22:15:35.337608] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:36.472 passed 00:07:36.472 
Test: lvol_clone ...passed 00:07:36.472 Test: lvol_clone_fail ...[2024-07-14 22:15:35.338224] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:36.472 passed 00:07:36.472 Test: lvol_iter_clones ...passed 00:07:36.472 Test: lvol_refcnt ...[2024-07-14 22:15:35.338783] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 404b29bd-74d5-4ae3-b3ae-0e24274f23a9 because it is still open 00:07:36.472 passed 00:07:36.472 Test: lvol_names ...[2024-07-14 22:15:35.339016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:36.472 [2024-07-14 22:15:35.339115] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:36.472 [2024-07-14 22:15:35.339339] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:36.472 passed 00:07:36.472 Test: lvol_create_thin_provisioned ...passed 00:07:36.472 Test: lvol_rename ...[2024-07-14 22:15:35.339836] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:36.472 [2024-07-14 22:15:35.339941] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:36.472 passed 00:07:36.472 Test: lvs_rename ...[2024-07-14 22:15:35.340192] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:36.472 passed 00:07:36.472 Test: lvol_inflate ...[2024-07-14 22:15:35.340419] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:36.472 passed 00:07:36.472 Test: lvol_decouple_parent ...[2024-07-14 22:15:35.340696] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:36.472 passed 00:07:36.472 Test: lvol_get_xattr ...passed 00:07:36.472 Test: lvol_esnap_reload ...passed 00:07:36.472 Test: lvol_esnap_create_bad_args ...[2024-07-14 22:15:35.341174] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:36.472 [2024-07-14 22:15:35.341219] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:36.472 [2024-07-14 22:15:35.341257] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:36.472 [2024-07-14 22:15:35.341378] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:36.472 passed 00:07:36.472 Test: lvol_esnap_create_delete ...[2024-07-14 22:15:35.341497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:36.472 passed 00:07:36.472 Test: lvol_esnap_load_esnaps ...[2024-07-14 22:15:35.341800] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:36.472 passed 00:07:36.472 Test: lvol_esnap_missing ...[2024-07-14 22:15:35.341975] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:36.472 [2024-07-14 22:15:35.342034] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:36.472 passed 00:07:36.472 Test: lvol_esnap_hotplug ... 00:07:36.472 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:36.472 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:36.472 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:36.472 [2024-07-14 22:15:35.342688] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f4fb706e-a34b-4bab-94b7-970061b37a6a: failed to create esnap bs_dev: error -12 00:07:36.472 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:36.472 [2024-07-14 22:15:35.342924] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 26db9a3b-3494-4f97-8ae4-f8910619613c: failed to create esnap bs_dev: error -12 00:07:36.472 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:36.472 [2024-07-14 22:15:35.343070] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 780e5803-d66c-406c-996b-0b9df0bc0130: failed to create esnap bs_dev: error -12 00:07:36.472 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:36.472 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:36.472 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:36.472 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:36.472 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:36.472 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:36.472 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:36.472 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:36.472 passed 00:07:36.472 Test: lvol_get_by ...passed 00:07:36.472 00:07:36.472 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.472 suites 1 1 n/a 0 0 00:07:36.472 tests 34 34 34 0 0 00:07:36.472 asserts 1439 1439 1439 0 n/a 00:07:36.472 00:07:36.472 Elapsed time = 0.012 seconds 00:07:36.472 00:07:36.472 real 0m0.050s 00:07:36.472 user 0m0.037s 00:07:36.472 sys 0m0.013s 00:07:36.472 22:15:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.472 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.472 ************************************ 00:07:36.472 END TEST unittest_lvol 00:07:36.472 ************************************ 00:07:36.472 22:15:35 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.472 22:15:35 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:36.472 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.472 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.472 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.472 ************************************ 00:07:36.472 START TEST unittest_nvme_rdma 00:07:36.472 ************************************ 00:07:36.473 22:15:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:36.473 00:07:36.473 00:07:36.473 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.473 http://cunit.sourceforge.net/ 00:07:36.473 00:07:36.473 00:07:36.473 Suite: nvme_rdma 00:07:36.473 Test: test_nvme_rdma_build_sgl_request ...[2024-07-14 22:15:35.432463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:36.473 [2024-07-14 22:15:35.432877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-14 22:15:35.432986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_build_contig_request ...[2024-07-14 22:15:35.433082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:36.473 Test: test_nvme_rdma_create_reqs ...[2024-07-14 22:15:35.433202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_create_rsps ...[2024-07-14 22:15:35.433542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-14 22:15:35.433725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_poller_create ...[2024-07-14 22:15:35.433775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-14 22:15:35.433953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:36.473 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:36.473 Test: test_nvme_rdma_req_init ...passed 00:07:36.473 Test: test_nvme_rdma_validate_cm_event ...passed 00:07:36.473 Test: test_nvme_rdma_qpair_init ...[2024-07-14 22:15:35.434241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:36.473 [2024-07-14 22:15:35.434280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:36.473 Test: test_nvme_rdma_memory_domain ...[2024-07-14 22:15:35.434513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:36.473 passed 00:07:36.473 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:36.473 Test: test_rdma_get_memory_translation ...[2024-07-14 22:15:35.434624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:36.473 [2024-07-14 22:15:35.434679] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:36.473 passed 00:07:36.473 Test: test_get_rdma_qpair_from_wc ...passed 00:07:36.473 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:36.473 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-14 22:15:35.434766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:36.473 passed 00:07:36.473 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-14 22:15:35.434801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:36.473 [2024-07-14 22:15:35.434907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:36.473 [2024-07-14 22:15:35.434956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:36.473 [2024-07-14 22:15:35.434986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe3d6ed8f0 on poll group 0x60b0000001a0 00:07:36.473 [2024-07-14 22:15:35.435041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:36.473 [2024-07-14 22:15:35.435081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:36.473 [2024-07-14 22:15:35.435105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe3d6ed8f0 on poll group 0x60b0000001a0 00:07:36.473 passed[2024-07-14 22:15:35.435181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:36.473 00:07:36.473 00:07:36.473 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.473 suites 1 1 n/a 0 0 00:07:36.473 tests 22 22 22 0 0 00:07:36.473 asserts 412 412 412 0 n/a 00:07:36.473 00:07:36.473 Elapsed time = 0.003 seconds 00:07:36.473 00:07:36.473 real 0m0.036s 00:07:36.473 user 0m0.028s 00:07:36.473 sys 0m0.009s 00:07:36.473 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.473 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.473 ************************************ 00:07:36.473 END TEST unittest_nvme_rdma 00:07:36.473 ************************************ 00:07:36.473 22:15:35 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:36.473 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.473 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.473 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.473 ************************************ 00:07:36.473 START TEST unittest_nvmf_transport 00:07:36.473 ************************************ 00:07:36.473 22:15:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:36.473 00:07:36.473 00:07:36.473 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.473 http://cunit.sourceforge.net/ 00:07:36.473 00:07:36.473 00:07:36.473 Suite: nvmf 00:07:36.473 Test: test_spdk_nvmf_transport_create ...[2024-07-14 22:15:35.521669] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:36.473 [2024-07-14 22:15:35.522057] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:36.473 [2024-07-14 22:15:35.522125] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:36.473 [2024-07-14 22:15:35.522246] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:36.473 passed 00:07:36.473 Test: test_nvmf_transport_poll_group_create ...passed 00:07:36.473 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-14 22:15:35.522494] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:36.473 passed 00:07:36.473 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-14 22:15:35.522581] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:36.473 [2024-07-14 22:15:35.522605] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:36.473 passed 00:07:36.473 00:07:36.473 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.473 suites 1 1 n/a 0 0 00:07:36.473 tests 4 4 4 0 0 00:07:36.473 asserts 49 49 49 0 n/a 00:07:36.473 00:07:36.473 Elapsed time = 0.001 seconds 00:07:36.732 00:07:36.732 real 0m0.043s 00:07:36.732 user 0m0.027s 00:07:36.732 sys 0m0.016s 00:07:36.732 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.732 ************************************ 00:07:36.732 END TEST unittest_nvmf_transport 00:07:36.732 ************************************ 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 22:15:35 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:36.732 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.732 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 ************************************ 00:07:36.732 START TEST unittest_rdma 00:07:36.732 ************************************ 00:07:36.732 22:15:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:36.732 00:07:36.732 00:07:36.732 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.732 http://cunit.sourceforge.net/ 00:07:36.732 00:07:36.732 00:07:36.732 Suite: rdma_common 00:07:36.732 Test: test_spdk_rdma_pd ...[2024-07-14 22:15:35.605735] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:36.732 passed 00:07:36.732 00:07:36.732 [2024-07-14 22:15:35.606071] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:36.732 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.732 suites 1 1 n/a 0 0 00:07:36.732 tests 1 1 1 0 0 00:07:36.732 asserts 31 31 31 0 n/a 00:07:36.732 00:07:36.732 Elapsed time = 0.001 seconds 00:07:36.732 00:07:36.732 real 0m0.035s 00:07:36.732 user 0m0.029s 00:07:36.732 sys 0m0.004s 00:07:36.732 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.732 ************************************ 00:07:36.732 END TEST unittest_rdma 00:07:36.732 ************************************ 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 22:15:35 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.732 22:15:35 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:36.732 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.732 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 ************************************ 00:07:36.732 START TEST unittest_nvme_cuse 00:07:36.732 ************************************ 00:07:36.732 22:15:35 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:36.732 00:07:36.732 00:07:36.732 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.732 http://cunit.sourceforge.net/ 00:07:36.732 00:07:36.732 00:07:36.732 Suite: nvme_cuse 00:07:36.732 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:36.732 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:36.732 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:36.732 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:36.732 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:36.732 Test: test_cuse_nvme_submit_io ...[2024-07-14 22:15:35.691138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:36.732 passed 00:07:36.732 Test: test_cuse_nvme_reset ...[2024-07-14 22:15:35.691617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:36.732 passed 00:07:36.732 Test: test_nvme_cuse_stop ...passed 00:07:36.732 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:36.732 00:07:36.732 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.732 suites 1 1 n/a 0 0 00:07:36.732 tests 9 9 9 0 0 00:07:36.732 asserts 121 121 121 0 n/a 00:07:36.732 00:07:36.732 Elapsed time = 0.002 seconds 00:07:36.732 00:07:36.732 real 0m0.035s 00:07:36.732 user 0m0.022s 00:07:36.732 sys 0m0.013s 00:07:36.732 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.732 ************************************ 00:07:36.732 END TEST unittest_nvme_cuse 00:07:36.732 ************************************ 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 22:15:35 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:07:36.732 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.732 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.732 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.732 ************************************ 00:07:36.733 START TEST unittest_nvmf 00:07:36.733 ************************************ 00:07:36.733 22:15:35 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:07:36.733 22:15:35 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:36.733 00:07:36.733 00:07:36.733 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.733 http://cunit.sourceforge.net/ 00:07:36.733 00:07:36.733 00:07:36.733 Suite: nvmf 00:07:36.733 Test: test_get_log_page ...[2024-07-14 22:15:35.778669] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:36.733 passed 00:07:36.733 Test: test_process_fabrics_cmd ...passed 00:07:36.733 Test: test_connect ...[2024-07-14 22:15:35.779949] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:36.733 [2024-07-14 22:15:35.780205] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:36.733 [2024-07-14 22:15:35.780359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:36.733 [2024-07-14 22:15:35.780517] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
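The test_connect failures above (data length too small, unsupported RECFMT, HOSTNQN not null terminated) are sanity checks on the fabrics CONNECT data. A sketch of equivalent checks (field sizes follow the NVMe-oF spec, but the struct layout here is simplified and all names are hypothetical):

/* Sketch of fabrics CONNECT data validation. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NVMF_NQN_FIELD_LEN 256  /* subnqn/hostnqn are 256-byte fields */

struct connect_data {
    uint16_t recfmt;
    char subnqn[NVMF_NQN_FIELD_LEN];
    char hostnqn[NVMF_NQN_FIELD_LEN];
};

static bool connect_cmd_valid(const struct connect_data *d, size_t data_len)
{
    if (data_len < sizeof(*d)) {
        fprintf(stderr, "Connect command data length 0x%zx too small\n",
                data_len);
        return false;
    }
    if (d->recfmt != 0) {   /* only record format 0 is defined */
        fprintf(stderr, "Connect command unsupported RECFMT %u\n", d->recfmt);
        return false;
    }
    /* The "not null terminated" error: a NUL must occur inside the field. */
    if (memchr(d->hostnqn, '\0', sizeof(d->hostnqn)) == NULL) {
        fprintf(stderr, "Connect HOSTNQN is not null terminated\n");
        return false;
    }
    return true;
}

int main(void)
{
    struct connect_data d;
    memset(&d, 'x', sizeof(d));          /* no NUL anywhere in hostnqn */
    d.recfmt = 0;
    connect_cmd_valid(&d, sizeof(d));    /* trips the HOSTNQN check */
    return 0;
}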
00:07:36.733 [2024-07-14 22:15:35.780720] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:36.733 [2024-07-14 22:15:35.780873] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:36.733 [2024-07-14 22:15:35.781085] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:36.733 [2024-07-14 22:15:35.781250] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:36.733 [2024-07-14 22:15:35.781509] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:36.733 [2024-07-14 22:15:35.781717] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:36.733 [2024-07-14 22:15:35.782121] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:36.733 [2024-07-14 22:15:35.782332] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:36.733 [2024-07-14 22:15:35.782538] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:36.733 [2024-07-14 22:15:35.782722] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:36.733 [2024-07-14 22:15:35.782953] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:36.733 [2024-07-14 22:15:35.783227] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:36.733 passed 00:07:36.733 Test: test_get_ns_id_desc_list ...passed 00:07:36.733 Test: test_identify_ns ...[2024-07-14 22:15:35.783979] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.733 [2024-07-14 22:15:35.784292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:36.733 [2024-07-14 22:15:35.784570] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:36.733 passed 00:07:36.733 Test: test_identify_ns_iocs_specific ...[2024-07-14 22:15:35.784973] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.733 [2024-07-14 22:15:35.785368] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.733 passed 00:07:36.733 Test: test_reservation_write_exclusive ...passed 00:07:36.733 Test: test_reservation_exclusive_access ...passed 00:07:36.733 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:36.733 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:36.733 Test: test_reservation_notification_log_page ...passed 00:07:36.733 Test: test_get_dif_ctx ...passed 00:07:36.733 Test: test_set_get_features ...[2024-07-14 22:15:35.787255] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:36.733 [2024-07-14 22:15:35.787424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:36.733 [2024-07-14 22:15:35.787593] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:36.733 [2024-07-14 22:15:35.787758] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:36.733 passed 00:07:36.733 Test: test_identify_ctrlr ...passed 00:07:36.733 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:36.733 Test: test_custom_admin_cmd ...passed 00:07:36.733 Test: test_fused_compare_and_write ...[2024-07-14 22:15:35.789060] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:36.733 [2024-07-14 22:15:35.789213] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:36.733 [2024-07-14 22:15:35.789365] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:36.733 passed 00:07:36.733 Test: test_multi_async_event_reqs ...passed 00:07:36.733 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:36.733 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:36.733 Test: test_multi_async_events ...passed 00:07:36.733 Test: test_rae ...passed 00:07:36.733 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:36.733 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:36.733 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-14 22:15:35.791318] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:36.733 passed 00:07:36.733 Test: test_zcopy_read ...passed 00:07:36.733 Test: test_zcopy_write ...passed 00:07:36.992 Test: test_nvmf_property_set ...passed 00:07:36.992 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-14 22:15:35.792342] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:36.992 [2024-07-14 22:15:35.792548] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:36.992 passed 00:07:36.992 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-14 22:15:35.792878] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:36.992 [2024-07-14 22:15:35.793037] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:36.992 [2024-07-14 22:15:35.793174] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:36.992 passed 00:07:36.992 00:07:36.992 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.992 suites 1 1 n/a 0 0 00:07:36.992 tests 30 30 30 0 0 00:07:36.992 asserts 885 885 885 0 n/a 00:07:36.992 00:07:36.992 Elapsed time = 0.007 seconds 00:07:36.992 22:15:35 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:36.992 00:07:36.992 00:07:36.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.992 http://cunit.sourceforge.net/ 00:07:36.992 00:07:36.992 00:07:36.992 Suite: nvmf 00:07:36.992 Test: test_get_rw_params ...passed 00:07:36.992 Test: test_lba_in_range ...passed 00:07:36.992 Test: test_get_dif_ctx ...passed 00:07:36.992 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:36.992 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-14 22:15:35.827110] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:36.992 [2024-07-14 22:15:35.827444] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:36.992 passed 00:07:36.992 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-14 22:15:35.827537] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:36.992 [2024-07-14 22:15:35.827589] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:36.992 passed 00:07:36.992 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-14 22:15:35.827662] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:36.992 [2024-07-14 22:15:35.827751] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:36.992 [2024-07-14 22:15:35.827806] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:36.992 [2024-07-14 22:15:35.827860] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:36.992 [2024-07-14 22:15:35.827888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:36.992 passed 00:07:36.992 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:36.992 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:36.992 00:07:36.992 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.992 suites 1 1 n/a 0 0 00:07:36.992 tests 9 9 9 0 0 00:07:36.992 asserts 157 157 157 0 n/a 00:07:36.992 00:07:36.992 Elapsed time = 0.001 seconds 00:07:36.992 22:15:35 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:36.992 00:07:36.992 00:07:36.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.992 http://cunit.sourceforge.net/ 00:07:36.992 00:07:36.992 00:07:36.992 Suite: nvmf 00:07:36.992 Test: test_discovery_log ...passed 00:07:36.992 Test: test_discovery_log_with_filters ...passed 00:07:36.992 00:07:36.992 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.992 suites 1 1 n/a 0 0 00:07:36.992 tests 2 2 2 0 0 00:07:36.992 asserts 238 238 238 0 n/a 00:07:36.992 00:07:36.992 Elapsed time = 0.003 seconds 00:07:36.992 22:15:35 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:36.992 00:07:36.992 00:07:36.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.992 http://cunit.sourceforge.net/ 00:07:36.992 00:07:36.992 00:07:36.992 Suite: nvmf 
00:07:36.993 Test: nvmf_test_create_subsystem ...[2024-07-14 22:15:35.907224] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:36.993 [2024-07-14 22:15:35.907674] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:36.993 [2024-07-14 22:15:35.907797] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:36.993 [2024-07-14 22:15:35.907837] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:36.993 [2024-07-14 22:15:35.907870] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:36.993 [2024-07-14 22:15:35.907903] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:36.993 [2024-07-14 22:15:35.908011] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:36.993 [2024-07-14 22:15:35.908167] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
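The NQN rules exercised above (prefix, per-label first/last character, 223-byte cap) reduce to a handful of string checks. A simplified standalone sketch, not SPDK's nvmf_nqn_is_valid(); the UUID-form NQNs and UTF-8 validation also seen in the log are omitted:

/* Sketch of basic NQN validation. */
#include <ctype.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define NQN_MIN_LEN 11   /* "nqn.YYYY-MM" */
#define NQN_MAX_LEN 223

static bool nqn_is_valid(const char *nqn)
{
    size_t len = strlen(nqn);

    if (len < NQN_MIN_LEN || len > NQN_MAX_LEN)
        return false;
    if (strncmp(nqn, "nqn.", 4) != 0)
        return false;

    /* A user-specified name must follow a ':' after the domain part. */
    const char *colon = strchr(nqn, ':');
    if (colon == NULL || colon[1] == '\0' || colon < nqn + 13)
        return false;

    /* Each dot-separated label of the reverse domain must start with a
     * letter and end with an alphanumeric character. */
    const char *label = nqn + 4 + 8;  /* skip "nqn." and "YYYY-MM." */
    while (label < colon) {
        const char *end = memchr(label, '.', (size_t)(colon - label));
        if (end == NULL) end = colon;
        if (end == label || !isalpha((unsigned char)label[0]) ||
            !isalnum((unsigned char)end[-1]))
            return false;
        label = end + 1;
    }
    return true;
}

int main(void)
{
    printf("%d\n", nqn_is_valid("nqn.2016-06.io.spdk:subsystem1"));  /* 1 */
    printf("%d\n", nqn_is_valid("nqn.2016-06.io.3spdk:sub"));        /* 0 */
    printf("%d\n", nqn_is_valid("nqn.2016-06.io.spdk-:subsystem1")); /* 0 */
    return 0;
}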
00:07:36.993 passed 00:07:36.993 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-14 22:15:35.908281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:36.993 [2024-07-14 22:15:35.908316] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:36.993 [2024-07-14 22:15:35.908339] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:36.993 [2024-07-14 22:15:35.908524] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:36.993 [2024-07-14 22:15:35.908638] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:36.993 passed 00:07:36.993 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:36.993 Test: test_reservation_register ...[2024-07-14 22:15:35.908911] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 [2024-07-14 22:15:35.909048] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:36.993 passed 00:07:36.993 Test: test_reservation_register_with_ptpl ...passed 00:07:36.993 Test: test_reservation_acquire_preempt_1 ...[2024-07-14 22:15:35.910057] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:36.993 Test: test_reservation_release ...[2024-07-14 22:15:35.912018] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_unregister_notification ...[2024-07-14 22:15:35.912293] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_release_notification ...[2024-07-14 22:15:35.912589] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_release_notification_write_exclusive ...[2024-07-14 22:15:35.912825] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_clear_notification ...[2024-07-14 22:15:35.913038] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_reservation_preempt_notification ...[2024-07-14 22:15:35.913276] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:36.993 passed 00:07:36.993 Test: test_spdk_nvmf_ns_event ...passed 00:07:36.993 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:36.993 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:36.993 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-14 22:15:35.913957] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:36.993 [2024-07-14 22:15:35.914064] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:36.993 passed 00:07:36.993 Test: test_nvmf_ns_reservation_report ...[2024-07-14 22:15:35.914198] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:36.993 passed 00:07:36.993 Test: test_nvmf_nqn_is_valid ...[2024-07-14 22:15:35.914275] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:36.993 [2024-07-14 22:15:35.914310] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:397fe450-3ae5-4bf9-b0f3-f0b22595321": uuid is not the correct length 00:07:36.993 [2024-07-14 22:15:35.914339] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:36.993 passed 00:07:36.993 Test: test_nvmf_ns_reservation_restore ...[2024-07-14 22:15:35.914466] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:36.993 passed 00:07:36.993 Test: test_nvmf_subsystem_state_change ...passed 00:07:36.993 Test: test_nvmf_reservation_custom_ops ...passed 00:07:36.993 00:07:36.993 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.993 suites 1 1 n/a 0 0 00:07:36.993 tests 22 22 22 0 0 00:07:36.993 asserts 407 407 407 0 n/a 00:07:36.993 00:07:36.993 Elapsed time = 0.008 seconds 00:07:36.993 22:15:35 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:36.993 00:07:36.993 00:07:36.993 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.993 http://cunit.sourceforge.net/ 00:07:36.993 00:07:36.993 00:07:36.993 Suite: nvmf 00:07:36.993 Test: test_nvmf_tcp_create ...[2024-07-14 22:15:35.976382] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:36.993 passed 00:07:36.993 Test: test_nvmf_tcp_destroy ...passed 00:07:36.993 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:36.993 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:36.993 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:36.993 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:37.252 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:37.252 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-14 22:15:36.073958] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:37.252 Test: test_nvmf_tcp_icreq_handle ...[2024-07-14 22:15:36.074047] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 
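The ICReq failures in the TCP suite ("Expected ICReq PFV 0, got 1", "Expected ICReq header length 128, got 0") check two fields of the initiator connection request. A sketch using the NVMe/TCP common-header layout (struct and function names are hypothetical):

/* Sketch of ICReq PDU validation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_TCP_PDU_TYPE_ICREQ 0x00
#define NVME_TCP_ICREQ_HLEN     128

struct nvme_tcp_common_hdr {
    uint8_t  pdu_type;
    uint8_t  flags;
    uint8_t  hlen;      /* header length */
    uint8_t  pdo;       /* PDU data offset */
    uint32_t plen;      /* total PDU length */
};

static bool icreq_valid(const struct nvme_tcp_common_hdr *ch, uint16_t pfv)
{
    if (ch->pdu_type != NVME_TCP_PDU_TYPE_ICREQ)
        return false;
    if (ch->hlen != NVME_TCP_ICREQ_HLEN) {
        fprintf(stderr, "PDU type=0x%02x, expected ICReq header length %d, "
                "got %d\n", ch->pdu_type, NVME_TCP_ICREQ_HLEN, ch->hlen);
        return false;
    }
    if (pfv != 0) {   /* only PDU format version 0 is defined */
        fprintf(stderr, "Expected ICReq PFV 0, got %u\n", pfv);
        return false;
    }
    return true;
}

int main(void)
{
    struct nvme_tcp_common_hdr ch = { NVME_TCP_PDU_TYPE_ICREQ, 0, 0, 0, 0 };
    icreq_valid(&ch, 0);   /* hlen 0: same failure the suite provokes */
    ch.hlen = NVME_TCP_ICREQ_HLEN;
    icreq_valid(&ch, 1);   /* bad PFV, as in the log */
    return 0;
}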
00:07:37.252 [2024-07-14 22:15:36.074126] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074183] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074218] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074310] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:37.252 [2024-07-14 22:15:36.074395] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074445] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:37.252 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-14 22:15:36.074476] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:37.252 [2024-07-14 22:15:36.074506] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074528] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074557] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074592] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074635] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074693] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:37.252 [2024-07-14 22:15:36.074728] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc835522d0 is same with the state(5) to be set 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-14 22:15:36.074789] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc83553030 00:07:37.252 [2024-07-14 22:15:36.074862] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.074915] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.074950] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc83552790 00:07:37.252 [2024-07-14 22:15:36.074974] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075010] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075035] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:37.252 [2024-07-14 22:15:36.075068] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075115] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075149] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:37.252 [2024-07-14 22:15:36.075174] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075202] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075229] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075257] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075307] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075345] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075384] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075411] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075470] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 [2024-07-14 22:15:36.075515] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075537] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-14 
22:15:36.075572] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:37.252 [2024-07-14 22:15:36.075593] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc83552790 is same with the state(5) to be set 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-14 22:15:36.093141] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:37.252 [2024-07-14 22:15:36.093199] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-14 22:15:36.093424] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:37.252 [2024-07-14 22:15:36.093453] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:37.252 passed 00:07:37.252 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:07:37.252 00:07:37.252 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.252 suites 1 1 n/a 0 0 00:07:37.252 tests 17 17 17 0 0 00:07:37.252 asserts 222 222 222 0 n/a 00:07:37.252 00:07:37.252 Elapsed time = 0.140 seconds 00:07:37.252 [2024-07-14 22:15:36.093595] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:37.252 [2024-07-14 22:15:36.093640] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
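Each *_ut binary in this run is a CUnit harness of the same shape, which is where the "Run Summary: ... suites/tests/asserts" tables come from. A minimal skeleton (placeholder suite and test names; link with -lcunit):

/* Minimal CUnit harness skeleton. */
#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    CU_pSuite suite = CU_add_suite("nvmf", NULL, NULL);
    if (suite == NULL ||
        CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* Prints the per-test results and the Run Summary table. */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    unsigned failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures == 0 ? 0 : 1;
}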
00:07:37.252 22:15:36 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:37.252 00:07:37.252 00:07:37.252 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.252 http://cunit.sourceforge.net/ 00:07:37.252 00:07:37.252 00:07:37.252 Suite: nvmf 00:07:37.252 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:37.252 00:07:37.252 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.252 suites 1 1 n/a 0 0 00:07:37.252 tests 1 1 1 0 0 00:07:37.252 asserts 17 17 17 0 n/a 00:07:37.252 00:07:37.252 Elapsed time = 0.022 seconds 00:07:37.252 00:07:37.252 real 0m0.481s 00:07:37.253 user 0m0.236s 00:07:37.253 sys 0m0.240s 00:07:37.253 ************************************ 00:07:37.253 END TEST unittest_nvmf 00:07:37.253 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.253 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.253 ************************************ 00:07:37.253 22:15:36 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.253 22:15:36 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.253 22:15:36 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:37.253 22:15:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.253 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.253 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.253 ************************************ 00:07:37.253 START TEST unittest_nvmf_rdma 00:07:37.253 ************************************ 00:07:37.253 22:15:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:37.253 00:07:37.253 00:07:37.253 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.253 http://cunit.sourceforge.net/ 00:07:37.253 00:07:37.253 00:07:37.253 Suite: nvmf 00:07:37.253 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-14 22:15:36.307131] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:37.253 [2024-07-14 22:15:36.307546] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:37.253 [2024-07-14 22:15:36.307625] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:37.253 passed 00:07:37.253 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:37.253 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:37.253 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:37.253 Test: test_nvmf_rdma_opts_init ...passed 00:07:37.253 Test: test_nvmf_rdma_request_free_data ...passed 00:07:37.253 Test: test_nvmf_rdma_update_ibv_state ...passed 00:07:37.253 Test: test_nvmf_rdma_resources_create ...[2024-07-14 22:15:36.309395] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
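The parse_sgl errors above ("SGL length 0x40000 exceeds max io size 0x20000", "In-capsule data length ... exceeds capsule length ...") are straightforward bounds checks. An equivalent standalone sketch (hypothetical helpers; values taken from the log):

/* Sketch of request-size bounds checks. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool sgl_len_ok(uint32_t sgl_len, uint32_t max_io_size)
{
    if (sgl_len > max_io_size) {
        fprintf(stderr, "SGL length 0x%x exceeds max io size 0x%x\n",
                sgl_len, max_io_size);
        return false;
    }
    return true;
}

static bool in_capsule_ok(uint32_t data_len, uint32_t capsule_len)
{
    if (data_len > capsule_len) {
        fprintf(stderr, "In-capsule data length 0x%x exceeds capsule "
                "length 0x%x\n", data_len, capsule_len);
        return false;
    }
    return true;
}

int main(void)
{
    sgl_len_ok(0x40000, 0x20000);   /* the first failure in the log */
    in_capsule_ok(0x1000, 0x0);     /* the second */
    return 0;
}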
00:07:37.253 [2024-07-14 22:15:36.309470] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:37.253 passed 00:07:37.253 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:37.253 Test: test_nvmf_rdma_resize_cq ...[2024-07-14 22:15:36.310891] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:37.253 Using CQ of insufficient size may lead to CQ overrun 00:07:37.253 passed 00:07:37.253 00:07:37.253 [2024-07-14 22:15:36.310998] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:37.253 [2024-07-14 22:15:36.311054] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:37.253 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.253 suites 1 1 n/a 0 0 00:07:37.253 tests 10 10 10 0 0 00:07:37.253 asserts 584 584 584 0 n/a 00:07:37.253 00:07:37.253 Elapsed time = 0.004 seconds 00:07:37.511 00:07:37.511 real 0m0.043s 00:07:37.511 user 0m0.021s 00:07:37.511 sys 0m0.022s 00:07:37.511 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.511 ************************************ 00:07:37.511 END TEST unittest_nvmf_rdma 00:07:37.511 ************************************ 00:07:37.511 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.511 22:15:36 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.511 22:15:36 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:07:37.511 22:15:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.511 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.511 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.511 ************************************ 00:07:37.511 START TEST unittest_scsi 00:07:37.511 ************************************ 00:07:37.511 22:15:36 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:07:37.511 22:15:36 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:37.511 00:07:37.511 00:07:37.511 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.511 http://cunit.sourceforge.net/ 00:07:37.511 00:07:37.511 00:07:37.511 Suite: dev_suite 00:07:37.511 Test: dev_destruct_null_dev ...passed 00:07:37.511 Test: dev_destruct_zero_luns ...passed 00:07:37.511 Test: dev_destruct_null_lun ...passed 00:07:37.511 Test: dev_destruct_success ...passed 00:07:37.511 Test: dev_construct_num_luns_zero ...[2024-07-14 22:15:36.402574] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:37.511 passed 00:07:37.511 Test: dev_construct_no_lun_zero ...passed 00:07:37.511 Test: dev_construct_null_lun ...[2024-07-14 22:15:36.402962] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:37.511 passed 00:07:37.511 Test: dev_construct_name_too_long ...[2024-07-14 22:15:36.403012] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:37.511 [2024-07-14 22:15:36.403050] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:37.511 passed 00:07:37.511 Test: dev_construct_success ...passed 00:07:37.511 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:37.511 Test: dev_queue_mgmt_task_success ...passed 00:07:37.511 Test: dev_queue_task_success ...passed 00:07:37.511 Test: dev_stop_success ...passed 00:07:37.511 Test: dev_add_port_max_ports ...passed 00:07:37.511 Test: dev_add_port_construct_failure1 ...[2024-07-14 22:15:36.403357] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:37.511 [2024-07-14 22:15:36.403458] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:37.511 passed 00:07:37.511 Test: dev_add_port_construct_failure2 ...passed 00:07:37.511 Test: dev_add_port_success1 ...passed 00:07:37.511 Test: dev_add_port_success2 ...passed 00:07:37.511 Test: dev_add_port_success3 ...passed 00:07:37.511 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:37.511 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:37.511 Test: dev_find_port_by_id_success ...[2024-07-14 22:15:36.403541] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:37.511 passed 00:07:37.511 Test: dev_add_lun_bdev_not_found ...passed 00:07:37.511 Test: dev_add_lun_no_free_lun_id ...[2024-07-14 22:15:36.403975] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:37.511 passed 00:07:37.511 Test: dev_add_lun_success1 ...passed 00:07:37.511 Test: dev_add_lun_success2 ...passed 00:07:37.511 Test: dev_check_pending_tasks ...passed 00:07:37.511 Test: dev_iterate_luns ...passed 00:07:37.511 Test: dev_find_free_lun ...passed 00:07:37.511 00:07:37.511 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.511 suites 1 1 n/a 0 0 00:07:37.511 tests 29 29 29 0 0 00:07:37.511 asserts 97 97 97 0 n/a 00:07:37.511 00:07:37.511 Elapsed time = 0.002 seconds 00:07:37.512 22:15:36 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:37.512 00:07:37.512 00:07:37.512 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.512 http://cunit.sourceforge.net/ 00:07:37.512 00:07:37.512 00:07:37.512 Suite: lun_suite 00:07:37.512 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-14 22:15:36.439405] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:37.512 passed 00:07:37.512 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-14 22:15:36.439855] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:37.512 passed 00:07:37.512 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:37.512 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:37.512 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:37.512 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-14 22:15:36.440021] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:37.512 passed 00:07:37.512 Test: lun_append_task_null_lun_alloc_len_lt_4096 
...passed 00:07:37.512 Test: lun_append_task_null_lun_not_supported ...passed 00:07:37.512 Test: lun_execute_scsi_task_pending ...passed 00:07:37.512 Test: lun_execute_scsi_task_complete ...passed 00:07:37.512 Test: lun_execute_scsi_task_resize ...passed 00:07:37.512 Test: lun_destruct_success ...passed 00:07:37.512 Test: lun_construct_null_ctx ...[2024-07-14 22:15:36.440215] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:37.512 passed 00:07:37.512 Test: lun_construct_success ...passed 00:07:37.512 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:07:37.512 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:37.512 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:37.512 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:37.512 00:07:37.512 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.512 suites 1 1 n/a 0 0 00:07:37.512 tests 18 18 18 0 0 00:07:37.512 asserts 153 153 153 0 n/a 00:07:37.512 00:07:37.512 Elapsed time = 0.001 seconds 00:07:37.512 22:15:36 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:37.512 00:07:37.512 00:07:37.512 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.512 http://cunit.sourceforge.net/ 00:07:37.512 00:07:37.512 00:07:37.512 Suite: scsi_suite 00:07:37.512 Test: scsi_init ...passed 00:07:37.512 00:07:37.512 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.512 suites 1 1 n/a 0 0 00:07:37.512 tests 1 1 1 0 0 00:07:37.512 asserts 1 1 1 0 n/a 00:07:37.512 00:07:37.512 Elapsed time = 0.000 seconds 00:07:37.512 22:15:36 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:37.512 00:07:37.512 00:07:37.512 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.512 http://cunit.sourceforge.net/ 00:07:37.512 00:07:37.512 00:07:37.512 Suite: translation_suite 00:07:37.512 Test: mode_select_6_test ...passed 00:07:37.512 Test: mode_select_6_test2 ...passed 00:07:37.512 Test: mode_sense_6_test ...passed 00:07:37.512 Test: mode_sense_10_test ...passed 00:07:37.512 Test: inquiry_evpd_test ...passed 00:07:37.512 Test: inquiry_standard_test ...passed 00:07:37.512 Test: inquiry_overflow_test ...passed 00:07:37.512 Test: task_complete_test ...passed 00:07:37.512 Test: lba_range_test ...passed 00:07:37.512 Test: xfer_len_test ...[2024-07-14 22:15:36.510259] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:37.512 passed 00:07:37.512 Test: xfer_test ...passed 00:07:37.512 Test: scsi_name_padding_test ...passed 00:07:37.512 Test: get_dif_ctx_test ...passed 00:07:37.512 Test: unmap_split_test ...passed 00:07:37.512 00:07:37.512 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.512 suites 1 1 n/a 0 0 00:07:37.512 tests 14 14 14 0 0 00:07:37.512 asserts 1200 1200 1200 0 n/a 00:07:37.512 00:07:37.512 Elapsed time = 0.004 seconds 00:07:37.512 22:15:36 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:37.512 00:07:37.512 00:07:37.512 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.512 http://cunit.sourceforge.net/ 00:07:37.512 00:07:37.512 00:07:37.512 Suite: reservation_suite 00:07:37.512 Test: test_reservation_register ...[2024-07-14 22:15:36.539844] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't 
match registrant's key 0xa 00:07:37.512 passed 00:07:37.512 Test: test_reservation_reserve ...[2024-07-14 22:15:36.540622] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:37.512 [2024-07-14 22:15:36.540741] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:37.512 [2024-07-14 22:15:36.540929] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:37.512 passed 00:07:37.512 Test: test_reservation_preempt_non_all_regs ...[2024-07-14 22:15:36.541037] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:37.512 [2024-07-14 22:15:36.541136] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:37.512 passed 00:07:37.512 Test: test_reservation_preempt_all_regs ...[2024-07-14 22:15:36.541318] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:37.512 passed 00:07:37.512 Test: test_reservation_cmds_conflict ...[2024-07-14 22:15:36.541512] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:37.512 [2024-07-14 22:15:36.541593] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:37.512 [2024-07-14 22:15:36.541650] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:37.512 [2024-07-14 22:15:36.541686] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:37.512 passed 00:07:37.512 Test: test_scsi2_reserve_release ...[2024-07-14 22:15:36.541736] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:37.512 [2024-07-14 22:15:36.541777] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:37.512 passed 00:07:37.512 Test: test_pr_with_scsi2_reserve_release ...[2024-07-14 22:15:36.541923] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:37.512 passed 00:07:37.512 00:07:37.512 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.512 suites 1 1 n/a 0 0 00:07:37.512 tests 7 7 7 0 0 00:07:37.512 asserts 257 257 257 0 n/a 00:07:37.512 00:07:37.512 Elapsed time = 0.002 seconds 00:07:37.512 00:07:37.512 real 0m0.171s 00:07:37.512 user 0m0.097s 00:07:37.512 sys 0m0.076s 00:07:37.512 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.512 ************************************ 00:07:37.512 END TEST unittest_scsi 00:07:37.512 ************************************ 00:07:37.512 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.771 22:15:36 -- unit/unittest.sh@276 -- # uname -s 00:07:37.771 22:15:36 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:07:37.771 22:15:36 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:07:37.771 22:15:36 -- common/autotest_common.sh@1077 -- 
# '[' 2 -le 1 ']' 00:07:37.771 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.771 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.771 ************************************ 00:07:37.771 START TEST unittest_sock 00:07:37.771 ************************************ 00:07:37.771 22:15:36 -- common/autotest_common.sh@1104 -- # unittest_sock 00:07:37.771 22:15:36 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:37.771 00:07:37.771 00:07:37.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.771 http://cunit.sourceforge.net/ 00:07:37.771 00:07:37.771 00:07:37.771 Suite: sock 00:07:37.771 Test: posix_sock ...passed 00:07:37.771 Test: ut_sock ...passed 00:07:37.771 Test: posix_sock_group ...passed 00:07:37.771 Test: ut_sock_group ...passed 00:07:37.771 Test: posix_sock_group_fairness ...passed 00:07:37.771 Test: _posix_sock_close ...passed 00:07:37.771 Test: sock_get_default_opts ...passed 00:07:37.771 Test: ut_sock_impl_get_set_opts ...passed 00:07:37.771 Test: posix_sock_impl_get_set_opts ...passed 00:07:37.771 Test: ut_sock_map ...passed 00:07:37.771 Test: override_impl_opts ...passed 00:07:37.771 Test: ut_sock_group_get_ctx ...passed 00:07:37.771 00:07:37.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.771 suites 1 1 n/a 0 0 00:07:37.771 tests 12 12 12 0 0 00:07:37.771 asserts 349 349 349 0 n/a 00:07:37.771 00:07:37.771 Elapsed time = 0.007 seconds 00:07:37.771 22:15:36 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:37.771 00:07:37.771 00:07:37.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.771 http://cunit.sourceforge.net/ 00:07:37.771 00:07:37.771 00:07:37.771 Suite: posix 00:07:37.771 Test: flush ...passed 00:07:37.771 00:07:37.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.771 suites 1 1 n/a 0 0 00:07:37.771 tests 1 1 1 0 0 00:07:37.771 asserts 28 28 28 0 n/a 00:07:37.771 00:07:37.771 Elapsed time = 0.000 seconds 00:07:37.771 22:15:36 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.771 00:07:37.771 real 0m0.097s 00:07:37.771 user 0m0.046s 00:07:37.771 sys 0m0.027s 00:07:37.771 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.771 ************************************ 00:07:37.771 END TEST unittest_sock 00:07:37.771 ************************************ 00:07:37.771 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.771 22:15:36 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:37.771 22:15:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.771 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.771 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.771 ************************************ 00:07:37.771 START TEST unittest_thread 00:07:37.771 ************************************ 00:07:37.771 22:15:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:37.771 00:07:37.771 00:07:37.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.771 http://cunit.sourceforge.net/ 00:07:37.771 00:07:37.771 00:07:37.771 Suite: io_channel 00:07:37.771 Test: thread_alloc ...passed 00:07:37.771 Test: thread_send_msg ...passed 00:07:37.771 Test: thread_poller ...passed 00:07:37.771 Test: poller_pause 
...passed 00:07:37.771 Test: thread_for_each ...passed 00:07:37.771 Test: for_each_channel_remove ...passed 00:07:37.771 Test: for_each_channel_unreg ...[2024-07-14 22:15:36.787692] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7fff48d7b260 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:37.771 passed 00:07:37.771 Test: thread_name ...passed 00:07:37.771 Test: channel ...[2024-07-14 22:15:36.791928] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x555c6c6200e0 00:07:37.771 passed 00:07:37.771 Test: channel_destroy_races ...passed 00:07:37.771 Test: thread_exit_test ...[2024-07-14 22:15:36.797032] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:37.771 passed 00:07:37.771 Test: thread_update_stats_test ...passed 00:07:37.771 Test: nested_channel ...passed 00:07:37.771 Test: device_unregister_and_thread_exit_race ...passed 00:07:37.771 Test: cache_closest_timed_poller ...passed 00:07:37.771 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:37.771 Test: io_device_lookup ...passed 00:07:37.771 Test: spdk_spin ...[2024-07-14 22:15:36.808060] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:37.771 [2024-07-14 22:15:36.808112] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff48d7b250 00:07:37.771 [2024-07-14 22:15:36.808208] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:37.771 [2024-07-14 22:15:36.809986] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:37.771 [2024-07-14 22:15:36.810074] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff48d7b250 00:07:37.771 [2024-07-14 22:15:36.810108] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:37.771 [2024-07-14 22:15:36.810137] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff48d7b250 00:07:37.771 [2024-07-14 22:15:36.810159] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:37.771 [2024-07-14 22:15:36.810191] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff48d7b250 00:07:37.771 [2024-07-14 22:15:36.810215] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:37.771 [2024-07-14 22:15:36.810254] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff48d7b250 00:07:37.771 passed 00:07:37.771 Test: for_each_channel_and_thread_exit_race ...passed 00:07:37.771 Test: for_each_thread_and_thread_exit_race ...passed 00:07:37.771 00:07:37.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.771 suites 1 1 n/a 0 0 00:07:37.771 tests 20 20 20 0 0 00:07:37.771 asserts 409 
409 409 0 n/a 00:07:37.771 00:07:37.771 Elapsed time = 0.050 seconds 00:07:38.030 00:07:38.030 real 0m0.091s 00:07:38.030 user 0m0.063s 00:07:38.030 sys 0m0.028s 00:07:38.030 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.030 ************************************ 00:07:38.030 END TEST unittest_thread 00:07:38.030 ************************************ 00:07:38.030 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:38.030 22:15:36 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:38.030 22:15:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.030 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.030 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:38.030 ************************************ 00:07:38.030 START TEST unittest_iobuf 00:07:38.030 ************************************ 00:07:38.030 22:15:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:38.030 00:07:38.030 00:07:38.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.030 http://cunit.sourceforge.net/ 00:07:38.030 00:07:38.030 00:07:38.030 Suite: io_channel 00:07:38.030 Test: iobuf ...passed 00:07:38.030 Test: iobuf_cache ...[2024-07-14 22:15:36.912063] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:38.030 [2024-07-14 22:15:36.912451] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:38.030 [2024-07-14 22:15:36.912618] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:38.030 [2024-07-14 22:15:36.912667] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:38.030 [2024-07-14 22:15:36.912738] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:38.030 [2024-07-14 22:15:36.912792] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:07:38.030 passed 00:07:38.030 00:07:38.030 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.030 suites 1 1 n/a 0 0 00:07:38.030 tests 2 2 2 0 0 00:07:38.030 asserts 107 107 107 0 n/a 00:07:38.030 00:07:38.030 Elapsed time = 0.006 seconds 00:07:38.030 00:07:38.030 real 0m0.038s 00:07:38.030 user 0m0.022s 00:07:38.030 sys 0m0.017s 00:07:38.030 22:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.030 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:38.030 ************************************ 00:07:38.030 END TEST unittest_iobuf 00:07:38.030 ************************************ 00:07:38.030 22:15:36 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:38.030 22:15:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.030 22:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.030 22:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:38.030 ************************************ 00:07:38.030 START TEST unittest_util 00:07:38.030 ************************************ 00:07:38.030 22:15:36 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:38.030 22:15:36 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:38.030 00:07:38.030 00:07:38.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.030 http://cunit.sourceforge.net/ 00:07:38.030 00:07:38.030 00:07:38.030 Suite: base64 00:07:38.030 Test: test_base64_get_encoded_strlen ...passed 00:07:38.030 Test: test_base64_get_decoded_len ...passed 00:07:38.030 Test: test_base64_encode ...passed 00:07:38.030 Test: test_base64_decode ...passed 00:07:38.030 Test: test_base64_urlsafe_encode ...passed 00:07:38.030 Test: test_base64_urlsafe_decode ...passed 00:07:38.030 00:07:38.030 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.030 suites 1 1 n/a 0 0 00:07:38.030 tests 6 6 6 0 0 00:07:38.030 asserts 112 112 112 0 n/a 00:07:38.030 00:07:38.030 Elapsed time = 0.000 seconds 00:07:38.030 22:15:37 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:38.030 00:07:38.030 00:07:38.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.030 http://cunit.sourceforge.net/ 00:07:38.030 00:07:38.030 00:07:38.030 Suite: bit_array 00:07:38.030 Test: test_1bit ...passed 00:07:38.030 Test: test_64bit ...passed 00:07:38.030 Test: test_find ...passed 00:07:38.030 Test: test_resize ...passed 00:07:38.030 Test: test_errors ...passed 00:07:38.030 Test: test_count ...passed 00:07:38.030 Test: test_mask_store_load ...passed 00:07:38.030 Test: test_mask_clear ...passed 00:07:38.030 00:07:38.030 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.030 suites 1 1 n/a 0 0 00:07:38.030 tests 8 8 8 0 0 00:07:38.030 asserts 5075 5075 5075 0 n/a 00:07:38.030 00:07:38.030 Elapsed time = 0.001 seconds 00:07:38.030 22:15:37 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:38.030 00:07:38.030 00:07:38.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.030 http://cunit.sourceforge.net/ 00:07:38.030 00:07:38.030 00:07:38.030 Suite: cpuset 00:07:38.030 Test: test_cpuset ...passed 00:07:38.030 Test: test_cpuset_parse ...[2024-07-14 22:15:37.049324] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:38.030 [2024-07-14 22:15:37.049748] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:38.030 [2024-07-14 22:15:37.049858] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:38.031 [2024-07-14 22:15:37.049950] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:38.031 [2024-07-14 22:15:37.049998] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:38.031 [2024-07-14 22:15:37.050048] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:38.031 [2024-07-14 22:15:37.050089] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:38.031 [2024-07-14 22:15:37.050151] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:38.031 passed 00:07:38.031 Test: test_cpuset_fmt ...passed 00:07:38.031 00:07:38.031 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.031 suites 1 1 n/a 0 0 00:07:38.031 tests 3 3 3 0 0 00:07:38.031 asserts 65 65 65 0 n/a 00:07:38.031 00:07:38.031 Elapsed time = 0.003 seconds 00:07:38.031 22:15:37 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:38.031 00:07:38.031 00:07:38.031 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.031 http://cunit.sourceforge.net/ 00:07:38.031 00:07:38.031 00:07:38.031 Suite: crc16 00:07:38.031 Test: test_crc16_t10dif ...passed 00:07:38.031 Test: test_crc16_t10dif_seed ...passed 00:07:38.031 Test: test_crc16_t10dif_copy ...passed 00:07:38.031 00:07:38.031 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.031 suites 1 1 n/a 0 0 00:07:38.031 tests 3 3 3 0 0 00:07:38.031 asserts 5 5 5 0 n/a 00:07:38.031 00:07:38.031 Elapsed time = 0.000 seconds 00:07:38.291 22:15:37 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:38.291 00:07:38.291 00:07:38.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.291 http://cunit.sourceforge.net/ 00:07:38.291 00:07:38.291 00:07:38.291 Suite: crc32_ieee 00:07:38.291 Test: test_crc32_ieee ...passed 00:07:38.291 00:07:38.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.291 suites 1 1 n/a 0 0 00:07:38.291 tests 1 1 1 0 0 00:07:38.291 asserts 1 1 1 0 n/a 00:07:38.291 00:07:38.291 Elapsed time = 0.000 seconds 00:07:38.291 22:15:37 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:38.291 00:07:38.291 00:07:38.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.291 http://cunit.sourceforge.net/ 00:07:38.291 00:07:38.291 00:07:38.291 Suite: crc32c 00:07:38.291 Test: test_crc32c ...passed 00:07:38.291 Test: test_crc32c_nvme ...passed 00:07:38.291 00:07:38.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.291 suites 1 1 n/a 0 0 00:07:38.291 tests 2 2 2 0 0 00:07:38.291 asserts 16 16 16 0 n/a 00:07:38.291 00:07:38.291 Elapsed time = 0.001 seconds 00:07:38.291 22:15:37 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:38.291 00:07:38.291 00:07:38.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.291 http://cunit.sourceforge.net/ 00:07:38.291 00:07:38.291 00:07:38.291 Suite: crc64 00:07:38.291 Test: test_crc64_nvme 
...passed 00:07:38.291 00:07:38.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.291 suites 1 1 n/a 0 0 00:07:38.291 tests 1 1 1 0 0 00:07:38.291 asserts 4 4 4 0 n/a 00:07:38.291 00:07:38.291 Elapsed time = 0.001 seconds 00:07:38.291 22:15:37 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:38.291 00:07:38.291 00:07:38.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.291 http://cunit.sourceforge.net/ 00:07:38.291 00:07:38.291 00:07:38.291 Suite: string 00:07:38.291 Test: test_parse_ip_addr ...passed 00:07:38.291 Test: test_str_chomp ...passed 00:07:38.291 Test: test_parse_capacity ...passed 00:07:38.291 Test: test_sprintf_append_realloc ...passed 00:07:38.291 Test: test_strtol ...passed 00:07:38.291 Test: test_strtoll ...passed 00:07:38.291 Test: test_strarray ...passed 00:07:38.291 Test: test_strcpy_replace ...passed 00:07:38.291 00:07:38.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.291 suites 1 1 n/a 0 0 00:07:38.291 tests 8 8 8 0 0 00:07:38.291 asserts 161 161 161 0 n/a 00:07:38.291 00:07:38.291 Elapsed time = 0.001 seconds 00:07:38.291 22:15:37 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:38.291 00:07:38.291 00:07:38.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.291 http://cunit.sourceforge.net/ 00:07:38.291 00:07:38.291 00:07:38.291 Suite: dif 00:07:38.291 Test: dif_generate_and_verify_test ...[2024-07-14 22:15:37.222938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:38.291 [2024-07-14 22:15:37.223664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:38.291 [2024-07-14 22:15:37.224143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:38.291 [2024-07-14 22:15:37.224609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:38.291 [2024-07-14 22:15:37.224904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:38.291 [2024-07-14 22:15:37.225210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:38.291 passed 00:07:38.291 Test: dif_disable_check_test ...[2024-07-14 22:15:37.226249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:38.291 [2024-07-14 22:15:37.226624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:38.291 [2024-07-14 22:15:37.226922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:38.291 passed 00:07:38.291 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-14 22:15:37.227998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:38.291 [2024-07-14 22:15:37.228341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:38.291 [2024-07-14 
22:15:37.228689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:38.291 [2024-07-14 22:15:37.229072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:38.291 [2024-07-14 22:15:37.229418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:38.291 [2024-07-14 22:15:37.229739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:38.291 [2024-07-14 22:15:37.230063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:38.291 [2024-07-14 22:15:37.230376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:38.291 [2024-07-14 22:15:37.230707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:38.291 [2024-07-14 22:15:37.231047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:38.291 [2024-07-14 22:15:37.231384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:38.291 passed 00:07:38.291 Test: dif_apptag_mask_test ...[2024-07-14 22:15:37.231733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:38.291 [2024-07-14 22:15:37.232123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:38.291 passed 00:07:38.291 Test: dif_sec_512_md_0_error_test ...[2024-07-14 22:15:37.232356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:38.291 passed 00:07:38.291 Test: dif_sec_4096_md_0_error_test ...passed 00:07:38.291 Test: dif_sec_4100_md_128_error_test ...[2024-07-14 22:15:37.232398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:38.291 [2024-07-14 22:15:37.232432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:38.291 passed 00:07:38.291 Test: dif_guard_seed_test ...passed 00:07:38.291 Test: dif_guard_value_test ...[2024-07-14 22:15:37.232524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:38.291 [2024-07-14 22:15:37.232559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:38.291 passed 00:07:38.291 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:38.291 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:38.291 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 22:15:37.276986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=7d4c, Actual=fd4c 00:07:38.291 [2024-07-14 22:15:37.279435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=7e21, Actual=fe21 00:07:38.292 [2024-07-14 22:15:37.281909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.284377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.286859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.292 [2024-07-14 22:15:37.289327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.292 [2024-07-14 22:15:37.291800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=79ee 00:07:38.292 [2024-07-14 22:15:37.293863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe21, Actual=d0cd 00:07:38.292 [2024-07-14 22:15:37.295954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab7d3ed, 
Actual=1ab753ed 00:07:38.292 [2024-07-14 22:15:37.298394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=3857c660, Actual=38574660 00:07:38.292 [2024-07-14 22:15:37.300874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.303318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.305794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.292 [2024-07-14 22:15:37.308258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.292 [2024-07-14 22:15:37.310718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=bd819a82 00:07:38.292 [2024-07-14 22:15:37.312804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574660, Actual=87d2b31c 00:07:38.292 [2024-07-14 22:15:37.314883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.292 [2024-07-14 22:15:37.317359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.292 [2024-07-14 22:15:37.319823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.322313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.324809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.292 [2024-07-14 22:15:37.327259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.292 [2024-07-14 22:15:37.329748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.292 [2024-07-14 22:15:37.331831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.292 passed 00:07:38.292 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-14 22:15:37.333042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:07:38.292 [2024-07-14 22:15:37.333366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:07:38.292 [2024-07-14 22:15:37.333670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.333982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 
[2024-07-14 22:15:37.334315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.334618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.334928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79ee 00:07:38.292 [2024-07-14 22:15:37.335117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d0cd 00:07:38.292 [2024-07-14 22:15:37.335319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.292 [2024-07-14 22:15:37.335624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:07:38.292 [2024-07-14 22:15:37.335966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.336281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.336603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.336907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.337218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bd819a82 00:07:38.292 [2024-07-14 22:15:37.337416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87d2b31c 00:07:38.292 [2024-07-14 22:15:37.337630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.292 [2024-07-14 22:15:37.337934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.292 [2024-07-14 22:15:37.338246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.338562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.338885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.292 [2024-07-14 22:15:37.339188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.292 [2024-07-14 22:15:37.339511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.292 [2024-07-14 22:15:37.339718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.292 passed 00:07:38.292 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-14 22:15:37.339969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:07:38.292 [2024-07-14 22:15:37.340299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:07:38.292 [2024-07-14 22:15:37.340614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.340934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.341256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.341569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.341871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79ee 00:07:38.292 [2024-07-14 22:15:37.342066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d0cd 00:07:38.292 [2024-07-14 22:15:37.342263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.292 [2024-07-14 22:15:37.342571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:07:38.292 [2024-07-14 22:15:37.342874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.343176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.343480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.343802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.344103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bd819a82 00:07:38.292 [2024-07-14 22:15:37.344300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87d2b31c 00:07:38.292 [2024-07-14 22:15:37.344534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.292 [2024-07-14 22:15:37.344839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.292 [2024-07-14 22:15:37.345149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.345463] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.345780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.292 [2024-07-14 22:15:37.346085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.292 [2024-07-14 22:15:37.346419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.292 [2024-07-14 22:15:37.346610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.292 passed 00:07:38.292 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-14 22:15:37.346871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:07:38.292 [2024-07-14 22:15:37.347196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:07:38.292 [2024-07-14 22:15:37.347510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.347829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.292 [2024-07-14 22:15:37.348168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.348490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.292 [2024-07-14 22:15:37.348805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79ee 00:07:38.292 [2024-07-14 22:15:37.349000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d0cd 00:07:38.292 [2024-07-14 22:15:37.349204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.292 [2024-07-14 22:15:37.349507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:07:38.293 [2024-07-14 22:15:37.349837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.350150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.350463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.553 [2024-07-14 22:15:37.350778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.553 [2024-07-14 22:15:37.351087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=bd819a82 00:07:38.553 [2024-07-14 22:15:37.351285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87d2b31c 00:07:38.553 [2024-07-14 22:15:37.351493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.553 [2024-07-14 22:15:37.351821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.553 [2024-07-14 22:15:37.352126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.352470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.352785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.553 [2024-07-14 22:15:37.353099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.553 [2024-07-14 22:15:37.353429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.553 [2024-07-14 22:15:37.353631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.553 passed 00:07:38.553 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-14 22:15:37.353887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:07:38.553 [2024-07-14 22:15:37.354193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:07:38.553 [2024-07-14 22:15:37.354503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.354815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.355149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.553 [2024-07-14 22:15:37.355452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.553 [2024-07-14 22:15:37.355786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79ee 00:07:38.553 [2024-07-14 22:15:37.355977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d0cd 00:07:38.553 passed 00:07:38.553 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-14 22:15:37.356232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.553 [2024-07-14 22:15:37.356560] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:07:38.553 [2024-07-14 22:15:37.356894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.553 [2024-07-14 22:15:37.357197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.357516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.357819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.358138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bd819a82 00:07:38.554 [2024-07-14 22:15:37.358331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87d2b31c 00:07:38.554 [2024-07-14 22:15:37.358581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.554 [2024-07-14 22:15:37.358893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.554 [2024-07-14 22:15:37.359200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.359511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.359835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.554 [2024-07-14 22:15:37.360150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.554 [2024-07-14 22:15:37.360489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.554 [2024-07-14 22:15:37.360692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.554 passed 00:07:38.554 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-14 22:15:37.360931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:07:38.554 [2024-07-14 22:15:37.361254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:07:38.554 [2024-07-14 22:15:37.361566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.361876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.362208] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.362518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.362829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79ee 00:07:38.554 [2024-07-14 22:15:37.363017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d0cd 00:07:38.554 passed 00:07:38.554 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-14 22:15:37.363269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.554 [2024-07-14 22:15:37.363576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:07:38.554 [2024-07-14 22:15:37.363919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.364233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.364565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.364873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:07:38.554 [2024-07-14 22:15:37.365186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bd819a82 00:07:38.554 [2024-07-14 22:15:37.365377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87d2b31c 00:07:38.554 [2024-07-14 22:15:37.365628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.554 [2024-07-14 22:15:37.365937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:07:38.554 [2024-07-14 22:15:37.366251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.366555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.366868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.554 [2024-07-14 22:15:37.367175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058 00:07:38.554 [2024-07-14 22:15:37.367503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.554 [2024-07-14 22:15:37.367714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=bdfe3fa86aea573b 00:07:38.554 passed 00:07:38.554 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:38.554 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:38.554 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:38.554 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 22:15:37.413032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=7d4c, Actual=fd4c 00:07:38.554 [2024-07-14 22:15:37.414274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e4f, Actual=8e4f 00:07:38.554 [2024-07-14 22:15:37.415511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.416761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.418012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.554 [2024-07-14 22:15:37.419234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.554 [2024-07-14 22:15:37.420489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=79ee 00:07:38.554 [2024-07-14 22:15:37.421733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=2bb6 00:07:38.554 [2024-07-14 22:15:37.422979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.554 [2024-07-14 22:15:37.424243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=86126e21, Actual=8612ee21 00:07:38.554 [2024-07-14 22:15:37.425499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.426778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.428034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.554 [2024-07-14 22:15:37.429304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.554 [2024-07-14 22:15:37.430541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=bd819a82 00:07:38.554 
[2024-07-14 22:15:37.431791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=78eb166 00:07:38.554 [2024-07-14 22:15:37.433065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.554 [2024-07-14 22:15:37.434346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6304840d12678722, Actual=6304840d12670722 00:07:38.554 [2024-07-14 22:15:37.435589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.436852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.438099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.554 [2024-07-14 22:15:37.439330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.554 [2024-07-14 22:15:37.440581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.554 [2024-07-14 22:15:37.441877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=68a84e9d724a4ff7 00:07:38.554 passed 00:07:38.554 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-14 22:15:37.442517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d4c, Actual=fd4c 00:07:38.554 [2024-07-14 22:15:37.442922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=99d5, Actual=19d5 00:07:38.554 [2024-07-14 22:15:37.443332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.443727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.444191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.554 [2024-07-14 22:15:37.444631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.554 [2024-07-14 22:15:37.445041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=79ee 00:07:38.554 [2024-07-14 22:15:37.445439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=bc2c 00:07:38.554 [2024-07-14 22:15:37.445838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.554 [2024-07-14 22:15:37.446248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5a47a56, Actual=a5a4fa56 00:07:38.554 [2024-07-14 22:15:37.446661] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.554 [2024-07-14 22:15:37.447062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.447464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.447895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.448307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bd819a82 00:07:38.555 [2024-07-14 22:15:37.448723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=2438a511 00:07:38.555 [2024-07-14 22:15:37.449159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.555 [2024-07-14 22:15:37.449564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=767b1f7e0bb408e7, Actual=767b1f7e0bb488e7 00:07:38.555 [2024-07-14 22:15:37.449981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.450383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.450791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000059 00:07:38.555 [2024-07-14 22:15:37.451184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000059 00:07:38.555 [2024-07-14 22:15:37.451606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.555 [2024-07-14 22:15:37.452018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=7dd7d5ee6b99c032 00:07:38.555 passed 00:07:38.555 Test: dix_sec_512_md_0_error ...[2024-07-14 22:15:37.452423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:38.555 passed 00:07:38.555 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:07:38.555 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:38.555 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:38.555 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:38.555 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:38.555 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:38.555 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:38.555 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:38.555 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:38.555 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 22:15:37.498181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=7d4c, Actual=fd4c 00:07:38.555 [2024-07-14 22:15:37.499422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e4f, Actual=8e4f 00:07:38.555 [2024-07-14 22:15:37.500680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.501941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.503183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.555 [2024-07-14 22:15:37.504425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.555 [2024-07-14 22:15:37.505679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=79ee 00:07:38.555 [2024-07-14 22:15:37.506916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=2bb6 00:07:38.555 [2024-07-14 22:15:37.508159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.555 [2024-07-14 22:15:37.509410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=86126e21, Actual=8612ee21 00:07:38.555 [2024-07-14 22:15:37.510663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.511928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.513175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.555 [2024-07-14 22:15:37.514418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=805f 00:07:38.555 [2024-07-14 22:15:37.515655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=bd819a82 00:07:38.555 [2024-07-14 22:15:37.516939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, 
Actual=78eb166 00:07:38.555 [2024-07-14 22:15:37.518191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.555 [2024-07-14 22:15:37.519413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6304840d12678722, Actual=6304840d12670722 00:07:38.555 [2024-07-14 22:15:37.520669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.521902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.523134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.555 [2024-07-14 22:15:37.524373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=80000000005f 00:07:38.555 [2024-07-14 22:15:37.525656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.555 [2024-07-14 22:15:37.526873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=68a84e9d724a4ff7 00:07:38.555 passed 00:07:38.555 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-14 22:15:37.527426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d4c, Actual=fd4c 00:07:38.555 [2024-07-14 22:15:37.527844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=99d5, Actual=19d5 00:07:38.555 [2024-07-14 22:15:37.528251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.528663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.529090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.529487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.529888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=79ee 00:07:38.555 [2024-07-14 22:15:37.530284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=bc2c 00:07:38.555 [2024-07-14 22:15:37.530700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab7d3ed, Actual=1ab753ed 00:07:38.555 [2024-07-14 22:15:37.531100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5a47a56, Actual=a5a4fa56 00:07:38.555 [2024-07-14 22:15:37.531539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 
22:15:37.531951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.532353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.532773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:07:38.555 [2024-07-14 22:15:37.533186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bd819a82 00:07:38.555 [2024-07-14 22:15:37.533604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=2438a511 00:07:38.555 [2024-07-14 22:15:37.534013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:07:38.555 [2024-07-14 22:15:37.534437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=767b1f7e0bb408e7, Actual=767b1f7e0bb488e7 00:07:38.555 [2024-07-14 22:15:37.534837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.535241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:07:38.555 [2024-07-14 22:15:37.535639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000059 00:07:38.555 [2024-07-14 22:15:37.536068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000059 00:07:38.555 [2024-07-14 22:15:37.536482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=407830657391baad 00:07:38.555 [2024-07-14 22:15:37.536891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=7dd7d5ee6b99c032 00:07:38.555 passed 00:07:38.555 Test: set_md_interleave_iovs_test ...passed 00:07:38.555 Test: set_md_interleave_iovs_split_test ...passed 00:07:38.555 Test: dif_generate_stream_pi_16_test ...passed 00:07:38.555 Test: dif_generate_stream_test ...passed 00:07:38.555 Test: set_md_interleave_iovs_alignment_test ...[2024-07-14 22:15:37.545782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
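The test names encode the layouts being exercised; reading them as <mode>_sec_<sector size>_md_<metadata size> is an inference from the naming convention, but it is consistent with the errors above:

```c
/* Inferred from the test names (an assumption, not stated in the log):
 *   dif_*  : the 8-byte tuple lives inside per-block metadata that is
 *            interleaved with the data ("extended LBA"), so
 *            sec_512_md_8    -> 512 + 8 = 520 bytes per block on the wire
 *   dix_*  : the tuple sits in a separate metadata buffer, e.g.
 *            sec_4096_md_128 -> 4096-byte data blocks + 128-byte metadata
 *   md_0   : no room for the tuple at all, which is exactly the
 *            "Metadata size is smaller than DIF size" rejection seen above,
 *            and an interleave buffer sized without the metadata likewise
 *            trips the "Buffer overflow will occur" check in
 *            spdk_dif_set_md_interleave_iovs. */
```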
00:07:38.555 passed 00:07:38.555 Test: dif_generate_split_test ...passed 00:07:38.555 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:38.555 Test: dif_verify_split_test ...passed 00:07:38.555 Test: dif_verify_stream_multi_segments_test ...passed 00:07:38.555 Test: update_crc32c_pi_16_test ...passed 00:07:38.555 Test: update_crc32c_test ...passed 00:07:38.556 Test: dif_update_crc32c_split_test ...passed 00:07:38.556 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:38.556 Test: get_range_with_md_test ...passed 00:07:38.556 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:38.556 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:38.556 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:38.556 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:38.556 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:38.556 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:38.556 Test: dif_generate_and_verify_unmap_test ...passed 00:07:38.556 00:07:38.556 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.556 suites 1 1 n/a 0 0 00:07:38.556 tests 79 79 79 0 0 00:07:38.556 asserts 3584 3584 3584 0 n/a 00:07:38.556 00:07:38.556 Elapsed time = 0.349 seconds 00:07:38.556 22:15:37 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:38.815 00:07:38.815 00:07:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.815 http://cunit.sourceforge.net/ 00:07:38.815 00:07:38.815 00:07:38.815 Suite: iov 00:07:38.815 Test: test_single_iov ...passed 00:07:38.815 Test: test_simple_iov ...passed 00:07:38.815 Test: test_complex_iov ...passed 00:07:38.815 Test: test_iovs_to_buf ...passed 00:07:38.815 Test: test_buf_to_iovs ...passed 00:07:38.815 Test: test_memset ...passed 00:07:38.815 Test: test_iov_one ...passed 00:07:38.815 Test: test_iov_xfer ...passed 00:07:38.815 00:07:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.815 suites 1 1 n/a 0 0 00:07:38.815 tests 8 8 8 0 0 00:07:38.815 asserts 156 156 156 0 n/a 00:07:38.815 00:07:38.815 Elapsed time = 0.000 seconds 00:07:38.815 22:15:37 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:38.815 00:07:38.815 00:07:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.815 http://cunit.sourceforge.net/ 00:07:38.815 00:07:38.815 00:07:38.815 Suite: math 00:07:38.815 Test: test_serial_number_arithmetic ...passed 00:07:38.815 Suite: erase 00:07:38.815 Test: test_memset_s ...passed 00:07:38.815 00:07:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.815 suites 2 2 n/a 0 0 00:07:38.815 tests 2 2 2 0 0 00:07:38.815 asserts 18 18 18 0 n/a 00:07:38.815 00:07:38.815 Elapsed time = 0.000 seconds 00:07:38.815 22:15:37 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:38.815 00:07:38.815 00:07:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.815 http://cunit.sourceforge.net/ 00:07:38.815 00:07:38.815 00:07:38.815 Suite: pipe 00:07:38.815 Test: test_create_destroy ...passed 00:07:38.815 Test: test_write_get_buffer ...passed 00:07:38.815 Test: test_write_advance ...passed 00:07:38.815 Test: test_read_get_buffer ...passed 00:07:38.815 Test: test_read_advance ...passed 00:07:38.815 Test: test_data ...passed 00:07:38.815 00:07:38.815 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:38.815 suites 1 1 n/a 0 0 00:07:38.815 tests 6 6 6 0 0 00:07:38.815 asserts 250 250 250 0 n/a 00:07:38.815 00:07:38.815 Elapsed time = 0.000 seconds 00:07:38.815 22:15:37 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:38.815 00:07:38.815 00:07:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.815 http://cunit.sourceforge.net/ 00:07:38.815 00:07:38.815 00:07:38.815 Suite: xor 00:07:38.815 Test: test_xor_gen ...passed 00:07:38.815 00:07:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.815 suites 1 1 n/a 0 0 00:07:38.815 tests 1 1 1 0 0 00:07:38.815 asserts 17 17 17 0 n/a 00:07:38.815 00:07:38.815 Elapsed time = 0.007 seconds 00:07:38.815 ************************************ 00:07:38.815 END TEST unittest_util 00:07:38.815 ************************************ 00:07:38.815 00:07:38.815 real 0m0.758s 00:07:38.815 user 0m0.547s 00:07:38.815 sys 0m0.190s 00:07:38.815 22:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.815 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:38.815 22:15:37 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:38.815 22:15:37 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:38.815 22:15:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.815 22:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.815 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:38.815 ************************************ 00:07:38.815 START TEST unittest_vhost 00:07:38.815 ************************************ 00:07:38.815 22:15:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:38.815 00:07:38.815 00:07:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.815 http://cunit.sourceforge.net/ 00:07:38.815 00:07:38.815 00:07:38.815 Suite: vhost_suite 00:07:38.815 Test: desc_to_iov_test ...[2024-07-14 22:15:37.810771] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:38.815 passed 00:07:38.815 Test: create_controller_test ...[2024-07-14 22:15:37.814365] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:38.815 [2024-07-14 22:15:37.814595] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:38.815 [2024-07-14 22:15:37.814794] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:38.815 [2024-07-14 22:15:37.814978] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:38.815 [2024-07-14 22:15:37.815152] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:38.815 [2024-07-14 22:15:37.815367] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-14 22:15:37.816417] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:38.815 passed 00:07:38.815 Test: session_find_by_vid_test ...passed 00:07:38.815 Test: remove_controller_test ...[2024-07-14 22:15:37.818668] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:38.815 passed 00:07:38.815 Test: vq_avail_ring_get_test ...passed 00:07:38.815 Test: vq_packed_ring_test ...passed 00:07:38.815 Test: vhost_blk_construct_test ...passed 00:07:38.815 00:07:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.815 suites 1 1 n/a 0 0 00:07:38.815 tests 7 7 7 0 0 00:07:38.815 asserts 145 145 145 0 n/a 00:07:38.815 00:07:38.815 Elapsed time = 0.010 seconds 00:07:38.815 00:07:38.815 real 0m0.040s 00:07:38.815 user 0m0.020s 00:07:38.815 sys 0m0.018s 00:07:38.815 22:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.815 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:38.815 ************************************ 00:07:38.816 END TEST unittest_vhost 00:07:38.816 ************************************ 00:07:38.816 22:15:37 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:38.816 22:15:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.816 22:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.816 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.074 ************************************ 00:07:39.074 START TEST unittest_dma 00:07:39.074 ************************************ 00:07:39.074 22:15:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:39.074 00:07:39.074 00:07:39.074 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.074 http://cunit.sourceforge.net/ 00:07:39.074 00:07:39.074 00:07:39.074 Suite: dma_suite 00:07:39.074 Test: test_dma ...[2024-07-14 22:15:37.895965] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:39.074 passed 00:07:39.074 00:07:39.074 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.074 suites 1 1 n/a 0 0 00:07:39.074 tests 1 1 1 0 0 00:07:39.074 asserts 50 50 50 0 n/a 00:07:39.074 00:07:39.074 Elapsed time = 0.001 seconds 00:07:39.074 00:07:39.074 real 0m0.033s 00:07:39.074 user 0m0.018s 00:07:39.074 sys 0m0.013s 00:07:39.074 22:15:37 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.074 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.074 ************************************ 00:07:39.074 END TEST unittest_dma 00:07:39.074 ************************************ 00:07:39.074 22:15:37 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:39.074 22:15:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.074 22:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.074 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.074 ************************************ 00:07:39.074 START TEST unittest_init 00:07:39.074 ************************************ 00:07:39.074 22:15:37 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:39.074 22:15:37 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:39.074 00:07:39.074 00:07:39.074 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.074 http://cunit.sourceforge.net/ 00:07:39.074 00:07:39.074 00:07:39.074 Suite: subsystem_suite 00:07:39.074 Test: subsystem_sort_test_depends_on_single ...passed 00:07:39.074 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:39.075 Test: subsystem_sort_test_missing_dependency ...[2024-07-14 22:15:37.982225] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:39.075 passed 00:07:39.075 00:07:39.075 [2024-07-14 22:15:37.982578] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:39.075 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.075 suites 1 1 n/a 0 0 00:07:39.075 tests 3 3 3 0 0 00:07:39.075 asserts 20 20 20 0 n/a 00:07:39.075 00:07:39.075 Elapsed time = 0.001 seconds 00:07:39.075 00:07:39.075 real 0m0.034s 00:07:39.075 user 0m0.025s 00:07:39.075 sys 0m0.010s 00:07:39.075 ************************************ 00:07:39.075 END TEST unittest_init 00:07:39.075 ************************************ 00:07:39.075 22:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.075 22:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.075 22:15:38 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:39.075 22:15:38 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:39.075 22:15:38 -- unit/unittest.sh@290 -- # hostname 00:07:39.075 22:15:38 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:39.334 geninfo: WARNING: invalid characters removed from testname! 
00:08:05.874 22:16:04 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:11.134 22:16:09 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:13.055 22:16:12 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:16.335 22:16:14 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:18.865 22:16:17 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:21.398 22:16:19 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:23.297 22:16:22 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:25.824 22:16:24 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:25.824 22:16:24 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:26.094 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:26.094 Found 309 entries. 
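Condensed, the coverage plumbing that just ran does four things (flags exactly as in the invocations above; paths abbreviated):

```c
/* 1. lcov ... -d . -c -t <sanitized hostname> -o ut_cov_test.info
 *      capture counters from the unit-test run (the geninfo warning above
 *      is the hostname-derived testname being sanitized);
 * 2. lcov -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info
 *      merge the pre-test baseline with the test capture;
 * 3. lcov -r ut_cov_unit.info '<subtree>/*' -o ut_cov_unit.info
 *      repeated once per subtree to drop app/, dpdk/, examples/,
 *      lib/vhost/rte_vhost/ and test/ sources from the report;
 * 4. genhtml ut_cov_unit.info --output-directory <out>
 *      render the HTML pages -- the "Processing file" lines that follow. */
```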
00:08:26.094 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:26.094 Writing .css and .png files. 00:08:26.094 Generating output. 00:08:26.094 Processing file include/linux/virtio_ring.h 00:08:26.667 Processing file include/spdk/mmio.h 00:08:26.667 Processing file include/spdk/nvme_spec.h 00:08:26.667 Processing file include/spdk/bdev_module.h 00:08:26.667 Processing file include/spdk/endian.h 00:08:26.667 Processing file include/spdk/util.h 00:08:26.667 Processing file include/spdk/base64.h 00:08:26.667 Processing file include/spdk/thread.h 00:08:26.667 Processing file include/spdk/histogram_data.h 00:08:26.667 Processing file include/spdk/trace.h 00:08:26.667 Processing file include/spdk/nvmf_transport.h 00:08:26.667 Processing file include/spdk/nvme.h 00:08:26.667 Processing file include/spdk_internal/utf.h 00:08:26.667 Processing file include/spdk_internal/nvme_tcp.h 00:08:26.667 Processing file include/spdk_internal/sock.h 00:08:26.667 Processing file include/spdk_internal/virtio.h 00:08:26.667 Processing file include/spdk_internal/sgl.h 00:08:26.667 Processing file include/spdk_internal/rdma.h 00:08:26.667 Processing file lib/accel/accel.c 00:08:26.667 Processing file lib/accel/accel_sw.c 00:08:26.667 Processing file lib/accel/accel_rpc.c 00:08:27.231 Processing file lib/bdev/bdev_rpc.c 00:08:27.231 Processing file lib/bdev/scsi_nvme.c 00:08:27.231 Processing file lib/bdev/bdev.c 00:08:27.231 Processing file lib/bdev/part.c 00:08:27.231 Processing file lib/bdev/bdev_zone.c 00:08:27.231 Processing file lib/blob/request.c 00:08:27.231 Processing file lib/blob/blob_bs_dev.c 00:08:27.231 Processing file lib/blob/zeroes.c 00:08:27.231 Processing file lib/blob/blobstore.h 00:08:27.231 Processing file lib/blob/blobstore.c 00:08:27.489 Processing file lib/blobfs/blobfs.c 00:08:27.489 Processing file lib/blobfs/tree.c 00:08:27.489 Processing file lib/conf/conf.c 00:08:27.489 Processing file lib/dma/dma.c 00:08:27.746 Processing file lib/env_dpdk/pci_vmd.c 00:08:27.746 Processing file lib/env_dpdk/env.c 00:08:27.746 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:27.746 Processing file lib/env_dpdk/pci_event.c 00:08:27.746 Processing file lib/env_dpdk/pci_dpdk.c 00:08:27.746 Processing file lib/env_dpdk/pci_ioat.c 00:08:27.746 Processing file lib/env_dpdk/memory.c 00:08:27.746 Processing file lib/env_dpdk/pci_virtio.c 00:08:27.746 Processing file lib/env_dpdk/threads.c 00:08:27.746 Processing file lib/env_dpdk/init.c 00:08:27.746 Processing file lib/env_dpdk/pci.c 00:08:27.746 Processing file lib/env_dpdk/sigbus_handler.c 00:08:27.746 Processing file lib/env_dpdk/pci_idxd.c 00:08:27.746 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:28.004 Processing file lib/event/reactor.c 00:08:28.004 Processing file lib/event/app_rpc.c 00:08:28.004 Processing file lib/event/log_rpc.c 00:08:28.004 Processing file lib/event/app.c 00:08:28.004 Processing file lib/event/scheduler_static.c 00:08:28.569 Processing file lib/ftl/ftl_l2p.c 00:08:28.569 Processing file lib/ftl/ftl_l2p_cache.c 00:08:28.569 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:28.569 Processing file lib/ftl/ftl_band_ops.c 00:08:28.569 Processing file lib/ftl/ftl_sb.c 00:08:28.569 Processing file lib/ftl/ftl_reloc.c 00:08:28.569 Processing file lib/ftl/ftl_nv_cache.c 00:08:28.569 Processing file lib/ftl/ftl_layout.c 00:08:28.569 Processing file lib/ftl/ftl_rq.c 00:08:28.569 Processing file lib/ftl/ftl_core.c 00:08:28.569 Processing file lib/ftl/ftl_debug.c 00:08:28.569 Processing file lib/ftl/ftl_band.h 00:08:28.569 
Processing file lib/ftl/ftl_core.h 00:08:28.569 Processing file lib/ftl/ftl_init.c 00:08:28.569 Processing file lib/ftl/ftl_writer.h 00:08:28.569 Processing file lib/ftl/ftl_l2p_flat.c 00:08:28.569 Processing file lib/ftl/ftl_band.c 00:08:28.569 Processing file lib/ftl/ftl_writer.c 00:08:28.570 Processing file lib/ftl/ftl_io.c 00:08:28.570 Processing file lib/ftl/ftl_trace.c 00:08:28.570 Processing file lib/ftl/ftl_debug.h 00:08:28.570 Processing file lib/ftl/ftl_io.h 00:08:28.570 Processing file lib/ftl/ftl_nv_cache.h 00:08:28.570 Processing file lib/ftl/ftl_p2l.c 00:08:28.570 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:28.570 Processing file lib/ftl/base/ftl_base_dev.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:28.827 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:28.827 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:28.827 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:29.085 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:29.085 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:29.085 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:29.085 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:29.343 Processing file lib/ftl/utils/ftl_property.c 00:08:29.343 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:29.343 Processing file lib/ftl/utils/ftl_df.h 00:08:29.343 Processing file lib/ftl/utils/ftl_property.h 00:08:29.343 Processing file lib/ftl/utils/ftl_conf.c 00:08:29.343 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:29.343 Processing file lib/ftl/utils/ftl_mempool.c 00:08:29.343 Processing file lib/ftl/utils/ftl_md.c 00:08:29.343 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:29.343 Processing file lib/idxd/idxd_internal.h 00:08:29.343 Processing file lib/idxd/idxd_user.c 00:08:29.343 Processing file lib/idxd/idxd.c 00:08:29.601 Processing file lib/init/subsystem_rpc.c 00:08:29.601 Processing file lib/init/subsystem.c 00:08:29.601 Processing file lib/init/rpc.c 00:08:29.601 Processing file lib/init/json_config.c 00:08:29.601 Processing file lib/ioat/ioat_internal.h 00:08:29.601 Processing file lib/ioat/ioat.c 00:08:30.168 Processing file lib/iscsi/conn.c 00:08:30.168 Processing file lib/iscsi/iscsi.h 00:08:30.168 Processing file lib/iscsi/portal_grp.c 00:08:30.168 Processing file lib/iscsi/param.c 00:08:30.168 Processing file lib/iscsi/iscsi.c 00:08:30.168 Processing file lib/iscsi/task.h 00:08:30.168 Processing file lib/iscsi/iscsi_rpc.c 00:08:30.168 Processing file lib/iscsi/tgt_node.c 00:08:30.168 Processing file lib/iscsi/md5.c 00:08:30.168 Processing file lib/iscsi/iscsi_subsystem.c 00:08:30.168 Processing file lib/iscsi/init_grp.c 00:08:30.168 Processing file lib/iscsi/task.c 00:08:30.168 Processing file lib/json/json_parse.c 00:08:30.168 Processing file lib/json/json_util.c 00:08:30.168 Processing file lib/json/json_write.c 00:08:30.426 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:08:30.426 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:30.426 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:30.426 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:30.426 Processing file lib/log/log.c 00:08:30.426 Processing file lib/log/log_flags.c 00:08:30.426 Processing file lib/log/log_deprecated.c 00:08:30.685 Processing file lib/lvol/lvol.c 00:08:30.685 Processing file lib/nbd/nbd_rpc.c 00:08:30.685 Processing file lib/nbd/nbd.c 00:08:30.685 Processing file lib/notify/notify_rpc.c 00:08:30.685 Processing file lib/notify/notify.c 00:08:31.621 Processing file lib/nvme/nvme_ns_cmd.c 00:08:31.621 Processing file lib/nvme/nvme_poll_group.c 00:08:31.621 Processing file lib/nvme/nvme_fabric.c 00:08:31.621 Processing file lib/nvme/nvme_tcp.c 00:08:31.621 Processing file lib/nvme/nvme_pcie_internal.h 00:08:31.621 Processing file lib/nvme/nvme_ctrlr.c 00:08:31.621 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:31.621 Processing file lib/nvme/nvme_pcie_common.c 00:08:31.621 Processing file lib/nvme/nvme_io_msg.c 00:08:31.621 Processing file lib/nvme/nvme_internal.h 00:08:31.621 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:31.621 Processing file lib/nvme/nvme_zns.c 00:08:31.621 Processing file lib/nvme/nvme_rdma.c 00:08:31.621 Processing file lib/nvme/nvme_opal.c 00:08:31.621 Processing file lib/nvme/nvme_vfio_user.c 00:08:31.621 Processing file lib/nvme/nvme_pcie.c 00:08:31.621 Processing file lib/nvme/nvme_quirks.c 00:08:31.621 Processing file lib/nvme/nvme_qpair.c 00:08:31.621 Processing file lib/nvme/nvme_ns.c 00:08:31.621 Processing file lib/nvme/nvme_cuse.c 00:08:31.621 Processing file lib/nvme/nvme_transport.c 00:08:31.621 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:31.621 Processing file lib/nvme/nvme.c 00:08:31.621 Processing file lib/nvme/nvme_discovery.c 00:08:32.188 Processing file lib/nvmf/nvmf_rpc.c 00:08:32.188 Processing file lib/nvmf/rdma.c 00:08:32.188 Processing file lib/nvmf/tcp.c 00:08:32.188 Processing file lib/nvmf/ctrlr_discovery.c 00:08:32.188 Processing file lib/nvmf/ctrlr.c 00:08:32.188 Processing file lib/nvmf/nvmf_internal.h 00:08:32.188 Processing file lib/nvmf/transport.c 00:08:32.188 Processing file lib/nvmf/nvmf.c 00:08:32.188 Processing file lib/nvmf/subsystem.c 00:08:32.188 Processing file lib/nvmf/ctrlr_bdev.c 00:08:32.188 Processing file lib/rdma/rdma_verbs.c 00:08:32.188 Processing file lib/rdma/common.c 00:08:32.447 Processing file lib/rpc/rpc.c 00:08:32.447 Processing file lib/scsi/dev.c 00:08:32.447 Processing file lib/scsi/scsi.c 00:08:32.447 Processing file lib/scsi/port.c 00:08:32.447 Processing file lib/scsi/scsi_bdev.c 00:08:32.447 Processing file lib/scsi/task.c 00:08:32.447 Processing file lib/scsi/scsi_rpc.c 00:08:32.447 Processing file lib/scsi/scsi_pr.c 00:08:32.447 Processing file lib/scsi/lun.c 00:08:32.706 Processing file lib/sock/sock_rpc.c 00:08:32.706 Processing file lib/sock/sock.c 00:08:32.706 Processing file lib/thread/iobuf.c 00:08:32.706 Processing file lib/thread/thread.c 00:08:32.964 Processing file lib/trace/trace_rpc.c 00:08:32.964 Processing file lib/trace/trace_flags.c 00:08:32.964 Processing file lib/trace/trace.c 00:08:32.964 Processing file lib/trace_parser/trace.cpp 00:08:32.964 Processing file lib/ut/ut.c 00:08:33.222 Processing file lib/ut_mock/mock.c 00:08:33.480 Processing file lib/util/uuid.c 00:08:33.480 Processing file lib/util/pipe.c 00:08:33.480 Processing file lib/util/xor.c 00:08:33.480 Processing file lib/util/crc16.c 00:08:33.480 Processing 
file lib/util/fd.c 00:08:33.480 Processing file lib/util/crc64.c 00:08:33.480 Processing file lib/util/iov.c 00:08:33.480 Processing file lib/util/crc32c.c 00:08:33.480 Processing file lib/util/fd_group.c 00:08:33.480 Processing file lib/util/crc32_ieee.c 00:08:33.480 Processing file lib/util/crc32.c 00:08:33.480 Processing file lib/util/file.c 00:08:33.480 Processing file lib/util/cpuset.c 00:08:33.480 Processing file lib/util/bit_array.c 00:08:33.480 Processing file lib/util/hexlify.c 00:08:33.480 Processing file lib/util/dif.c 00:08:33.480 Processing file lib/util/math.c 00:08:33.480 Processing file lib/util/strerror_tls.c 00:08:33.480 Processing file lib/util/string.c 00:08:33.480 Processing file lib/util/zipf.c 00:08:33.480 Processing file lib/util/base64.c 00:08:33.738 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:33.738 Processing file lib/vfio_user/host/vfio_user.c 00:08:33.738 Processing file lib/vhost/vhost_internal.h 00:08:33.738 Processing file lib/vhost/vhost_scsi.c 00:08:33.738 Processing file lib/vhost/vhost_blk.c 00:08:33.738 Processing file lib/vhost/rte_vhost_user.c 00:08:33.738 Processing file lib/vhost/vhost_rpc.c 00:08:33.738 Processing file lib/vhost/vhost.c 00:08:33.996 Processing file lib/virtio/virtio_vhost_user.c 00:08:33.996 Processing file lib/virtio/virtio.c 00:08:33.996 Processing file lib/virtio/virtio_pci.c 00:08:33.996 Processing file lib/virtio/virtio_vfio_user.c 00:08:33.996 Processing file lib/vmd/led.c 00:08:33.996 Processing file lib/vmd/vmd.c 00:08:34.255 Processing file module/accel/dsa/accel_dsa.c 00:08:34.255 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:34.255 Processing file module/accel/error/accel_error_rpc.c 00:08:34.255 Processing file module/accel/error/accel_error.c 00:08:34.255 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:34.255 Processing file module/accel/iaa/accel_iaa.c 00:08:34.512 Processing file module/accel/ioat/accel_ioat.c 00:08:34.512 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:34.512 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:34.512 Processing file module/bdev/aio/bdev_aio.c 00:08:34.512 Processing file module/bdev/delay/vbdev_delay.c 00:08:34.512 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:34.769 Processing file module/bdev/error/vbdev_error.c 00:08:34.769 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:34.769 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:34.769 Processing file module/bdev/ftl/bdev_ftl.c 00:08:35.027 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:35.027 Processing file module/bdev/gpt/gpt.c 00:08:35.027 Processing file module/bdev/gpt/gpt.h 00:08:35.027 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:35.027 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:35.027 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:35.027 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:35.284 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:35.284 Processing file module/bdev/malloc/bdev_malloc.c 00:08:35.284 Processing file module/bdev/null/bdev_null.c 00:08:35.284 Processing file module/bdev/null/bdev_null_rpc.c 00:08:35.541 Processing file module/bdev/nvme/vbdev_opal.c 00:08:35.541 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:35.541 Processing file module/bdev/nvme/bdev_nvme.c 00:08:35.541 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:35.541 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:35.541 Processing file module/bdev/nvme/nvme_rpc.c 00:08:35.541 Processing file 
module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:35.799 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:35.799 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:36.057 Processing file module/bdev/raid/raid5f.c 00:08:36.057 Processing file module/bdev/raid/concat.c 00:08:36.057 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:36.057 Processing file module/bdev/raid/raid1.c 00:08:36.057 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:36.057 Processing file module/bdev/raid/bdev_raid.h 00:08:36.057 Processing file module/bdev/raid/raid0.c 00:08:36.057 Processing file module/bdev/raid/bdev_raid.c 00:08:36.057 Processing file module/bdev/split/vbdev_split.c 00:08:36.057 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:36.315 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:36.315 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:36.315 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:36.315 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:36.315 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:36.315 Processing file module/blob/bdev/blob_bdev.c 00:08:36.572 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:36.572 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:36.572 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:36.572 Processing file module/event/subsystems/accel/accel.c 00:08:36.572 Processing file module/event/subsystems/bdev/bdev.c 00:08:36.830 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:36.830 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:36.830 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:36.830 Processing file module/event/subsystems/nbd/nbd.c 00:08:37.087 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:37.087 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:37.087 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:37.087 Processing file module/event/subsystems/scsi/scsi.c 00:08:37.087 Processing file module/event/subsystems/sock/sock.c 00:08:37.345 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:37.345 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:37.345 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:37.345 Processing file module/event/subsystems/vmd/vmd.c 00:08:37.602 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:37.602 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:37.602 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:37.602 Processing file module/sock/sock_kernel.h 00:08:37.860 Processing file module/sock/posix/posix.c 00:08:37.860 Writing directory view page. 
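The overall rates in the summary just below agree with the raw counts:

```c
/* lines......: 39263 / 100392 = 0.39110... -> reported as 39.1%
 * functions..:  3587 /   8384 = 0.42784... -> reported as 42.8% */
```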
00:08:37.860 Overall coverage rate: 00:08:37.860 lines......: 39.1% (39263 of 100392 lines) 00:08:37.860 functions..: 42.8% (3587 of 8384 functions) 00:08:37.860 00:08:37.860 00:08:37.860 ===================== 00:08:37.860 All unit tests passed 00:08:37.860 ===================== 00:08:37.860 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:37.860 22:16:36 -- unit/unittest.sh@302 -- # set +x 00:08:37.860 00:08:37.860 00:08:37.860 00:08:37.860 real 3m8.622s 00:08:37.860 user 2m41.998s 00:08:37.860 sys 0m14.914s 00:08:37.860 22:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.860 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:08:37.860 ************************************ 00:08:37.860 END TEST unittest 00:08:37.860 ************************************ 00:08:37.860 22:16:36 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:37.860 22:16:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:37.860 22:16:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:37.860 22:16:36 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:37.860 22:16:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.860 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:08:37.860 22:16:36 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:37.860 22:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.860 22:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.860 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:08:37.860 ************************************ 00:08:37.860 START TEST env 00:08:37.860 ************************************ 00:08:37.860 22:16:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:37.860 * Looking for test storage... 
00:08:37.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:37.860 22:16:36 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:37.860 22:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.860 22:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.860 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:08:37.860 ************************************ 00:08:37.860 START TEST env_memory 00:08:37.860 ************************************ 00:08:37.860 22:16:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:37.860 00:08:37.861 00:08:37.861 CUnit - A unit testing framework for C - Version 2.1-3 00:08:37.861 http://cunit.sourceforge.net/ 00:08:37.861 00:08:37.861 00:08:37.861 Suite: memory 00:08:38.118 Test: alloc and free memory map ...[2024-07-14 22:16:36.980994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:38.118 passed 00:08:38.118 Test: mem map translation ...[2024-07-14 22:16:37.059001] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:38.118 [2024-07-14 22:16:37.059369] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:38.118 [2024-07-14 22:16:37.059726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:38.118 [2024-07-14 22:16:37.060077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:38.118 passed 00:08:38.118 Test: mem map registration ...[2024-07-14 22:16:37.126915] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:38.118 [2024-07-14 22:16:37.127112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:38.118 passed 00:08:38.376 Test: mem map adjacent registrations ...passed 00:08:38.376 00:08:38.376 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.376 suites 1 1 n/a 0 0 00:08:38.376 tests 4 4 4 0 0 00:08:38.376 asserts 152 152 152 0 n/a 00:08:38.376 00:08:38.376 Elapsed time = 0.289 seconds 00:08:38.376 00:08:38.376 real 0m0.324s 00:08:38.376 user 0m0.298s 00:08:38.376 sys 0m0.024s 00:08:38.376 22:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.376 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:08:38.376 ************************************ 00:08:38.376 END TEST env_memory 00:08:38.376 ************************************ 00:08:38.376 22:16:37 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:38.376 22:16:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:38.376 22:16:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.376 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:08:38.376 ************************************ 00:08:38.376 START TEST env_vtophys 00:08:38.376 ************************************ 00:08:38.376 22:16:37 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:38.376 EAL: lib.eal log level changed from notice to debug 00:08:38.376 EAL: Detected lcore 0 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 1 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 2 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 3 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 4 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 5 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 6 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 7 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 8 as core 0 on socket 0 00:08:38.376 EAL: Detected lcore 9 as core 0 on socket 0 00:08:38.376 EAL: Maximum logical cores by configuration: 128 00:08:38.376 EAL: Detected CPU lcores: 10 00:08:38.376 EAL: Detected NUMA nodes: 1 00:08:38.376 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:38.376 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:38.376 EAL: Checking presence of .so 'librte_eal.so' 00:08:38.376 EAL: Detected static linkage of DPDK 00:08:38.376 EAL: No shared files mode enabled, IPC will be disabled 00:08:38.376 EAL: Selected IOVA mode 'PA' 00:08:38.376 EAL: Probing VFIO support... 00:08:38.376 EAL: IOMMU type 1 (Type 1) is supported 00:08:38.376 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:38.376 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:38.376 EAL: VFIO support initialized 00:08:38.376 EAL: Ask a virtual area of 0x2e000 bytes 00:08:38.376 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:38.376 EAL: Setting up physically contiguous memory... 00:08:38.376 EAL: Setting maximum number of open files to 1048576 00:08:38.376 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:38.376 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:38.376 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.376 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:38.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.376 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.376 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:38.376 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:38.376 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.376 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:38.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.376 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.376 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:38.376 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:38.376 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.376 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:38.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.376 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.376 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:38.376 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:38.376 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.376 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:38.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.376 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.376 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:38.377 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:38.377 EAL: Hugepages will be freed exactly as allocated. 
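The virtual-area requests above follow directly from the memseg-list parameters:

```c
/* Each list: n_segs * hugepage_sz = 8192 * 0x200000 = 0x400000000 bytes
 * (16 GiB), matching every "Ask a virtual area of 0x400000000 bytes";
 * the four lists therefore reserve 64 GiB of address space up front,
 * while each small 0x61000-byte area evidently holds a list's segment
 * bookkeeping. The reservation is virtual only -- as the last line says,
 * hugepages are faulted in and freed exactly as allocated. */
```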
00:08:38.377 EAL: No shared files mode enabled, IPC is disabled 00:08:38.377 EAL: No shared files mode enabled, IPC is disabled 00:08:38.634 EAL: TSC frequency is ~2200000 KHz 00:08:38.634 EAL: Main lcore 0 is ready (tid=7f932bd50a40;cpuset=[0]) 00:08:38.634 EAL: Trying to obtain current memory policy. 00:08:38.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.634 EAL: Restoring previous memory policy: 0 00:08:38.634 EAL: request: mp_malloc_sync 00:08:38.634 EAL: No shared files mode enabled, IPC is disabled 00:08:38.634 EAL: Heap on socket 0 was expanded by 2MB 00:08:38.634 EAL: No shared files mode enabled, IPC is disabled 00:08:38.634 EAL: Mem event callback 'spdk:(nil)' registered 00:08:38.634 00:08:38.634 00:08:38.634 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.634 http://cunit.sourceforge.net/ 00:08:38.634 00:08:38.634 00:08:38.634 Suite: components_suite 00:08:38.892 Test: vtophys_malloc_test ...passed 00:08:38.892 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:38.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.892 EAL: Restoring previous memory policy: 0 00:08:38.892 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.892 EAL: request: mp_malloc_sync 00:08:38.892 EAL: No shared files mode enabled, IPC is disabled 00:08:38.892 EAL: Heap on socket 0 was expanded by 4MB 00:08:38.892 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.892 EAL: request: mp_malloc_sync 00:08:38.892 EAL: No shared files mode enabled, IPC is disabled 00:08:38.892 EAL: Heap on socket 0 was shrunk by 4MB 00:08:38.892 EAL: Trying to obtain current memory policy. 00:08:38.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.892 EAL: Restoring previous memory policy: 0 00:08:38.892 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.892 EAL: request: mp_malloc_sync 00:08:38.892 EAL: No shared files mode enabled, IPC is disabled 00:08:38.892 EAL: Heap on socket 0 was expanded by 6MB 00:08:38.892 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.892 EAL: request: mp_malloc_sync 00:08:38.892 EAL: No shared files mode enabled, IPC is disabled 00:08:38.892 EAL: Heap on socket 0 was shrunk by 6MB 00:08:39.150 EAL: Trying to obtain current memory policy. 00:08:39.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.150 EAL: Restoring previous memory policy: 0 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was expanded by 10MB 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was shrunk by 10MB 00:08:39.150 EAL: Trying to obtain current memory policy. 00:08:39.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.150 EAL: Restoring previous memory policy: 0 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was expanded by 18MB 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was shrunk by 18MB 00:08:39.150 EAL: Trying to obtain current memory policy. 
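The "Mem event callback 'spdk:(nil)' registered" line above, and the "Calling mem event callback" lines that bracket each heap expansion and shrink, are DPDK's memory-event hook at work: SPDK subscribes so it can keep its own address maps in step with the EAL heap. A minimal sketch of that registration pattern (simplified; not SPDK's actual handler):

```c
#include <stdio.h>
#include <rte_memory.h>

/* Called by the EAL whenever hugepage memory is mapped in or unmapped. */
static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("%s: addr=%p len=%zu\n",
	       type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
	/* An SPDK-like consumer would (un)register the region with its
	 * own vtophys / IOMMU maps here. */
}

/* After rte_eal_init():
 *     rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
 * which is consistent with the logged name:arg pair "spdk:(nil)". */
```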
00:08:39.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.150 EAL: Restoring previous memory policy: 0 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was expanded by 34MB 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was shrunk by 34MB 00:08:39.150 EAL: Trying to obtain current memory policy. 00:08:39.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.150 EAL: Restoring previous memory policy: 0 00:08:39.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.150 EAL: request: mp_malloc_sync 00:08:39.150 EAL: No shared files mode enabled, IPC is disabled 00:08:39.150 EAL: Heap on socket 0 was expanded by 66MB 00:08:39.408 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.408 EAL: request: mp_malloc_sync 00:08:39.408 EAL: No shared files mode enabled, IPC is disabled 00:08:39.408 EAL: Heap on socket 0 was shrunk by 66MB 00:08:39.408 EAL: Trying to obtain current memory policy. 00:08:39.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.408 EAL: Restoring previous memory policy: 0 00:08:39.408 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.408 EAL: request: mp_malloc_sync 00:08:39.408 EAL: No shared files mode enabled, IPC is disabled 00:08:39.408 EAL: Heap on socket 0 was expanded by 130MB 00:08:39.665 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.665 EAL: request: mp_malloc_sync 00:08:39.665 EAL: No shared files mode enabled, IPC is disabled 00:08:39.665 EAL: Heap on socket 0 was shrunk by 130MB 00:08:39.923 EAL: Trying to obtain current memory policy. 00:08:39.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.923 EAL: Restoring previous memory policy: 0 00:08:39.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.923 EAL: request: mp_malloc_sync 00:08:39.923 EAL: No shared files mode enabled, IPC is disabled 00:08:39.923 EAL: Heap on socket 0 was expanded by 258MB 00:08:40.491 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.491 EAL: request: mp_malloc_sync 00:08:40.491 EAL: No shared files mode enabled, IPC is disabled 00:08:40.491 EAL: Heap on socket 0 was shrunk by 258MB 00:08:40.750 EAL: Trying to obtain current memory policy. 00:08:40.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:41.008 EAL: Restoring previous memory policy: 0 00:08:41.008 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.008 EAL: request: mp_malloc_sync 00:08:41.008 EAL: No shared files mode enabled, IPC is disabled 00:08:41.008 EAL: Heap on socket 0 was expanded by 514MB 00:08:41.942 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.942 EAL: request: mp_malloc_sync 00:08:41.942 EAL: No shared files mode enabled, IPC is disabled 00:08:41.942 EAL: Heap on socket 0 was shrunk by 514MB 00:08:42.509 EAL: Trying to obtain current memory policy. 
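The expand/shrink pairs above trace a doubling-allocation probe: each request of 2^k MiB appears to grow the heap by (2^k + 2) MiB, i.e. the request in 2 MiB hugepages plus one extra page of allocator overhead. That reading is an inference, but it reproduces the logged ladder exactly, including the 1026MB step in the entries that follow:

```c
#include <stdio.h>

/* Reproduces the "expanded by 4MB ... 1026MB" sequence logged above,
 * assuming each malloc of 2^k MiB costs (2^k + 2) MiB of hugepages. */
int main(void)
{
	for (unsigned k = 1; k <= 10; k++) {
		unsigned req_mb  = 1u << k;    /* 2, 4, 8, ..., 1024 MiB requested */
		unsigned grow_mb = req_mb + 2; /* 4, 6, 10, ..., 1026 MiB expanded */
		printf("malloc %4u MiB -> heap expanded (then shrunk) by %u MiB\n",
		       req_mb, grow_mb);
	}
	return 0;
}
```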
00:08:42.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:42.767 EAL: Restoring previous memory policy: 0 00:08:42.767 EAL: Calling mem event callback 'spdk:(nil)' 00:08:42.767 EAL: request: mp_malloc_sync 00:08:42.767 EAL: No shared files mode enabled, IPC is disabled 00:08:42.767 EAL: Heap on socket 0 was expanded by 1026MB 00:08:44.668 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.668 EAL: request: mp_malloc_sync 00:08:44.668 EAL: No shared files mode enabled, IPC is disabled 00:08:44.668 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:46.042 passed 00:08:46.042 00:08:46.042 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.042 suites 1 1 n/a 0 0 00:08:46.042 tests 2 2 2 0 0 00:08:46.042 asserts 6496 6496 6496 0 n/a 00:08:46.042 00:08:46.042 Elapsed time = 7.506 seconds 00:08:46.042 EAL: Calling mem event callback 'spdk:(nil)' 00:08:46.042 EAL: request: mp_malloc_sync 00:08:46.042 EAL: No shared files mode enabled, IPC is disabled 00:08:46.042 EAL: Heap on socket 0 was shrunk by 2MB 00:08:46.042 EAL: No shared files mode enabled, IPC is disabled 00:08:46.042 EAL: No shared files mode enabled, IPC is disabled 00:08:46.042 EAL: No shared files mode enabled, IPC is disabled 00:08:46.300 00:08:46.300 real 0m7.832s 00:08:46.300 user 0m6.688s 00:08:46.300 sys 0m0.983s 00:08:46.300 22:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.300 ************************************ 00:08:46.300 END TEST env_vtophys 00:08:46.300 ************************************ 00:08:46.300 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.300 22:16:45 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:46.300 22:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:46.300 22:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.300 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.300 ************************************ 00:08:46.300 START TEST env_pci 00:08:46.300 ************************************ 00:08:46.300 22:16:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:46.300 00:08:46.300 00:08:46.300 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.300 http://cunit.sourceforge.net/ 00:08:46.300 00:08:46.300 00:08:46.300 Suite: pci 00:08:46.300 Test: pci_hook ...[2024-07-14 22:16:45.199450] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 104920 has claimed it 00:08:46.301 EAL: Cannot find device (10000:00:01.0) 00:08:46.301 EAL: Failed to attach device on primary process 00:08:46.301 passed 00:08:46.301 00:08:46.301 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.301 suites 1 1 n/a 0 0 00:08:46.301 tests 1 1 1 0 0 00:08:46.301 asserts 25 25 25 0 n/a 00:08:46.301 00:08:46.301 Elapsed time = 0.006 seconds 00:08:46.301 00:08:46.301 real 0m0.090s 00:08:46.301 user 0m0.044s 00:08:46.301 sys 0m0.046s 00:08:46.301 22:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.301 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.301 ************************************ 00:08:46.301 END TEST env_pci 00:08:46.301 ************************************ 00:08:46.301 22:16:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:46.301 22:16:45 -- env/env.sh@15 -- # uname 00:08:46.301 22:16:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:46.301 22:16:45 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:46.301 22:16:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:46.301 22:16:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:46.301 22:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.301 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.301 ************************************ 00:08:46.301 START TEST env_dpdk_post_init 00:08:46.301 ************************************ 00:08:46.301 22:16:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:46.559 EAL: Detected CPU lcores: 10 00:08:46.559 EAL: Detected NUMA nodes: 1 00:08:46.559 EAL: Detected static linkage of DPDK 00:08:46.559 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:46.559 EAL: Selected IOVA mode 'PA' 00:08:46.559 EAL: VFIO support initialized 00:08:46.559 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:46.559 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:46.559 Starting DPDK initialization... 00:08:46.559 Starting SPDK post initialization... 00:08:46.559 SPDK NVMe probe 00:08:46.559 Attaching to 0000:00:06.0 00:08:46.559 Attached to 0000:00:06.0 00:08:46.559 Cleaning up... 00:08:46.559 00:08:46.559 real 0m0.268s 00:08:46.559 user 0m0.092s 00:08:46.559 sys 0m0.078s 00:08:46.559 ************************************ 00:08:46.559 END TEST env_dpdk_post_init 00:08:46.559 ************************************ 00:08:46.559 22:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.559 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.818 22:16:45 -- env/env.sh@26 -- # uname 00:08:46.818 22:16:45 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:46.818 22:16:45 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:46.818 22:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:46.818 22:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.818 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.818 ************************************ 00:08:46.818 START TEST env_mem_callbacks 00:08:46.818 ************************************ 00:08:46.818 22:16:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:46.818 EAL: Detected CPU lcores: 10 00:08:46.818 EAL: Detected NUMA nodes: 1 00:08:46.818 EAL: Detected static linkage of DPDK 00:08:46.818 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:46.818 EAL: Selected IOVA mode 'PA' 00:08:46.818 EAL: VFIO support initialized 00:08:46.818 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:46.818 00:08:46.818 00:08:46.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.818 http://cunit.sourceforge.net/ 00:08:46.818 00:08:46.818 00:08:46.818 Suite: memory 00:08:46.818 Test: test ... 
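The trace below pairs every heap registration with a matching unregistration as buffers are allocated and freed; the PASSED lines record that each malloc'd buffer landed inside a registered region. A quick pairing check on a captured trace (the file name is hypothetical):

    awk '/^register /   {seen[$2]++}
         /^unregister / {seen[$2]--}
         END {for (a in seen) if (seen[a]) print "still registered:", a}' mem_callbacks.log

The one expected leftover is the initial 2MB region at 0x200000200000, which is never unregistered within the trace.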
00:08:46.818 register 0x200000200000 2097152 00:08:46.818 malloc 3145728 00:08:46.818 register 0x200000400000 4194304 00:08:46.818 buf 0x2000004fffc0 len 3145728 PASSED 00:08:46.818 malloc 64 00:08:46.818 buf 0x2000004ffec0 len 64 PASSED 00:08:46.818 malloc 4194304 00:08:46.818 register 0x200000800000 6291456 00:08:46.818 buf 0x2000009fffc0 len 4194304 PASSED 00:08:46.818 free 0x2000004fffc0 3145728 00:08:46.818 free 0x2000004ffec0 64 00:08:46.818 unregister 0x200000400000 4194304 PASSED 00:08:46.818 free 0x2000009fffc0 4194304 00:08:46.818 unregister 0x200000800000 6291456 PASSED 00:08:46.818 malloc 8388608 00:08:46.818 register 0x200000400000 10485760 00:08:47.077 buf 0x2000005fffc0 len 8388608 PASSED 00:08:47.077 free 0x2000005fffc0 8388608 00:08:47.077 unregister 0x200000400000 10485760 PASSED 00:08:47.077 passed 00:08:47.077 00:08:47.077 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.077 suites 1 1 n/a 0 0 00:08:47.077 tests 1 1 1 0 0 00:08:47.077 asserts 15 15 15 0 n/a 00:08:47.077 00:08:47.077 Elapsed time = 0.047 seconds 00:08:47.077 00:08:47.077 real 0m0.292s 00:08:47.077 user 0m0.106s 00:08:47.077 sys 0m0.086s 00:08:47.077 ************************************ 00:08:47.077 END TEST env_mem_callbacks 00:08:47.077 ************************************ 00:08:47.077 22:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.077 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.077 00:08:47.077 real 0m9.156s 00:08:47.077 user 0m7.430s 00:08:47.077 sys 0m1.347s 00:08:47.077 ************************************ 00:08:47.077 END TEST env 00:08:47.077 ************************************ 00:08:47.077 22:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.077 22:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.077 22:16:46 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:47.077 22:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.077 22:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.077 22:16:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.077 ************************************ 00:08:47.077 START TEST rpc 00:08:47.077 ************************************ 00:08:47.077 22:16:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:47.077 * Looking for test storage... 00:08:47.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:47.077 22:16:46 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:47.077 22:16:46 -- rpc/rpc.sh@65 -- # spdk_pid=105050 00:08:47.077 22:16:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.077 22:16:46 -- rpc/rpc.sh@67 -- # waitforlisten 105050 00:08:47.077 22:16:46 -- common/autotest_common.sh@819 -- # '[' -z 105050 ']' 00:08:47.077 22:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.077 22:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:47.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.077 22:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
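Here rpc.sh launches spdk_tgt with bdev tracepoints enabled (-e bdev) and then blocks in waitforlisten until the RPC socket accepts requests. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten also checks that the pid is still alive and caps its retries, per the max_retries=100 seen above):

    build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_pid) is listening"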
00:08:47.077 22:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:47.077 22:16:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.336 [2024-07-14 22:16:46.167131] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:47.336 [2024-07-14 22:16:46.167297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105050 ] 00:08:47.336 [2024-07-14 22:16:46.316905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.594 [2024-07-14 22:16:46.489416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.594 [2024-07-14 22:16:46.489664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:47.594 [2024-07-14 22:16:46.489701] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 105050' to capture a snapshot of events at runtime. 00:08:47.594 [2024-07-14 22:16:46.489720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid105050 for offline analysis/debug. 00:08:47.594 [2024-07-14 22:16:46.489816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.967 22:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.967 22:16:47 -- common/autotest_common.sh@852 -- # return 0 00:08:48.967 22:16:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:48.967 22:16:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:48.967 22:16:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:48.967 22:16:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:48.967 22:16:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.967 22:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.967 22:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.968 ************************************ 00:08:48.968 START TEST rpc_integrity 00:08:48.968 ************************************ 00:08:48.968 22:16:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:48.968 22:16:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:48.968 22:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.968 22:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.968 22:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.968 22:16:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:48.968 22:16:47 -- rpc/rpc.sh@13 -- # jq length 00:08:48.968 22:16:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:48.968 22:16:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:48.968 22:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.968 22:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.968 22:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.968 22:16:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:48.968 22:16:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:48.968 22:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.968 22:16:47 -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.968 22:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.968 22:16:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:48.968 { 00:08:48.968 "name": "Malloc0", 00:08:48.968 "aliases": [ 00:08:48.968 "ac4d06b1-71d9-495a-8f3f-7280ab910986" 00:08:48.968 ], 00:08:48.968 "product_name": "Malloc disk", 00:08:48.968 "block_size": 512, 00:08:48.968 "num_blocks": 16384, 00:08:48.968 "uuid": "ac4d06b1-71d9-495a-8f3f-7280ab910986", 00:08:48.968 "assigned_rate_limits": { 00:08:48.968 "rw_ios_per_sec": 0, 00:08:48.968 "rw_mbytes_per_sec": 0, 00:08:48.968 "r_mbytes_per_sec": 0, 00:08:48.968 "w_mbytes_per_sec": 0 00:08:48.968 }, 00:08:48.968 "claimed": false, 00:08:48.968 "zoned": false, 00:08:48.968 "supported_io_types": { 00:08:48.968 "read": true, 00:08:48.968 "write": true, 00:08:48.968 "unmap": true, 00:08:48.968 "write_zeroes": true, 00:08:48.968 "flush": true, 00:08:48.968 "reset": true, 00:08:48.968 "compare": false, 00:08:48.968 "compare_and_write": false, 00:08:48.968 "abort": true, 00:08:48.968 "nvme_admin": false, 00:08:48.968 "nvme_io": false 00:08:48.968 }, 00:08:48.968 "memory_domains": [ 00:08:48.968 { 00:08:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.968 "dma_device_type": 2 00:08:48.968 } 00:08:48.968 ], 00:08:48.968 "driver_specific": {} 00:08:48.968 } 00:08:48.968 ]' 00:08:48.968 22:16:47 -- rpc/rpc.sh@17 -- # jq length 00:08:49.227 22:16:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:49.227 22:16:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 [2024-07-14 22:16:48.047141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:49.227 [2024-07-14 22:16:48.047251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.227 [2024-07-14 22:16:48.047291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:49.227 [2024-07-14 22:16:48.047313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.227 [2024-07-14 22:16:48.049763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.227 [2024-07-14 22:16:48.049853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:49.227 Passthru0 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:49.227 { 00:08:49.227 "name": "Malloc0", 00:08:49.227 "aliases": [ 00:08:49.227 "ac4d06b1-71d9-495a-8f3f-7280ab910986" 00:08:49.227 ], 00:08:49.227 "product_name": "Malloc disk", 00:08:49.227 "block_size": 512, 00:08:49.227 "num_blocks": 16384, 00:08:49.227 "uuid": "ac4d06b1-71d9-495a-8f3f-7280ab910986", 00:08:49.227 "assigned_rate_limits": { 00:08:49.227 "rw_ios_per_sec": 0, 00:08:49.227 "rw_mbytes_per_sec": 0, 00:08:49.227 "r_mbytes_per_sec": 0, 00:08:49.227 "w_mbytes_per_sec": 0 00:08:49.227 }, 00:08:49.227 "claimed": true, 00:08:49.227 "claim_type": "exclusive_write", 00:08:49.227 "zoned": false, 00:08:49.227 "supported_io_types": { 00:08:49.227 "read": true, 
00:08:49.227 "write": true, 00:08:49.227 "unmap": true, 00:08:49.227 "write_zeroes": true, 00:08:49.227 "flush": true, 00:08:49.227 "reset": true, 00:08:49.227 "compare": false, 00:08:49.227 "compare_and_write": false, 00:08:49.227 "abort": true, 00:08:49.227 "nvme_admin": false, 00:08:49.227 "nvme_io": false 00:08:49.227 }, 00:08:49.227 "memory_domains": [ 00:08:49.227 { 00:08:49.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.227 "dma_device_type": 2 00:08:49.227 } 00:08:49.227 ], 00:08:49.227 "driver_specific": {} 00:08:49.227 }, 00:08:49.227 { 00:08:49.227 "name": "Passthru0", 00:08:49.227 "aliases": [ 00:08:49.227 "73142a5e-a267-58a9-8743-e507648d98ec" 00:08:49.227 ], 00:08:49.227 "product_name": "passthru", 00:08:49.227 "block_size": 512, 00:08:49.227 "num_blocks": 16384, 00:08:49.227 "uuid": "73142a5e-a267-58a9-8743-e507648d98ec", 00:08:49.227 "assigned_rate_limits": { 00:08:49.227 "rw_ios_per_sec": 0, 00:08:49.227 "rw_mbytes_per_sec": 0, 00:08:49.227 "r_mbytes_per_sec": 0, 00:08:49.227 "w_mbytes_per_sec": 0 00:08:49.227 }, 00:08:49.227 "claimed": false, 00:08:49.227 "zoned": false, 00:08:49.227 "supported_io_types": { 00:08:49.227 "read": true, 00:08:49.227 "write": true, 00:08:49.227 "unmap": true, 00:08:49.227 "write_zeroes": true, 00:08:49.227 "flush": true, 00:08:49.227 "reset": true, 00:08:49.227 "compare": false, 00:08:49.227 "compare_and_write": false, 00:08:49.227 "abort": true, 00:08:49.227 "nvme_admin": false, 00:08:49.227 "nvme_io": false 00:08:49.227 }, 00:08:49.227 "memory_domains": [ 00:08:49.227 { 00:08:49.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.227 "dma_device_type": 2 00:08:49.227 } 00:08:49.227 ], 00:08:49.227 "driver_specific": { 00:08:49.227 "passthru": { 00:08:49.227 "name": "Passthru0", 00:08:49.227 "base_bdev_name": "Malloc0" 00:08:49.227 } 00:08:49.227 } 00:08:49.227 } 00:08:49.227 ]' 00:08:49.227 22:16:48 -- rpc/rpc.sh@21 -- # jq length 00:08:49.227 22:16:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:49.227 22:16:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:49.227 22:16:48 -- rpc/rpc.sh@26 -- # jq length 00:08:49.227 22:16:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:49.227 ************************************ 00:08:49.227 END TEST rpc_integrity 00:08:49.227 ************************************ 00:08:49.227 00:08:49.227 real 0m0.334s 00:08:49.227 user 0m0.225s 00:08:49.227 sys 0m0.027s 00:08:49.227 22:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:49.227 22:16:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:08:49.227 22:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 ************************************ 00:08:49.227 START TEST rpc_plugins 00:08:49.227 ************************************ 00:08:49.227 22:16:48 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:49.227 22:16:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:49.227 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.227 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.227 22:16:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:49.486 22:16:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:49.486 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.486 22:16:48 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:49.486 { 00:08:49.486 "name": "Malloc1", 00:08:49.486 "aliases": [ 00:08:49.486 "d6ae6128-2808-4c74-a89f-ad13eb23c7a9" 00:08:49.486 ], 00:08:49.486 "product_name": "Malloc disk", 00:08:49.486 "block_size": 4096, 00:08:49.486 "num_blocks": 256, 00:08:49.486 "uuid": "d6ae6128-2808-4c74-a89f-ad13eb23c7a9", 00:08:49.486 "assigned_rate_limits": { 00:08:49.486 "rw_ios_per_sec": 0, 00:08:49.486 "rw_mbytes_per_sec": 0, 00:08:49.486 "r_mbytes_per_sec": 0, 00:08:49.486 "w_mbytes_per_sec": 0 00:08:49.486 }, 00:08:49.486 "claimed": false, 00:08:49.486 "zoned": false, 00:08:49.486 "supported_io_types": { 00:08:49.486 "read": true, 00:08:49.486 "write": true, 00:08:49.486 "unmap": true, 00:08:49.486 "write_zeroes": true, 00:08:49.486 "flush": true, 00:08:49.486 "reset": true, 00:08:49.486 "compare": false, 00:08:49.486 "compare_and_write": false, 00:08:49.486 "abort": true, 00:08:49.486 "nvme_admin": false, 00:08:49.486 "nvme_io": false 00:08:49.486 }, 00:08:49.486 "memory_domains": [ 00:08:49.486 { 00:08:49.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.486 "dma_device_type": 2 00:08:49.486 } 00:08:49.486 ], 00:08:49.486 "driver_specific": {} 00:08:49.486 } 00:08:49.486 ]' 00:08:49.486 22:16:48 -- rpc/rpc.sh@32 -- # jq length 00:08:49.486 22:16:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:49.486 22:16:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:49.486 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.486 22:16:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:49.486 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.486 22:16:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:49.486 22:16:48 -- rpc/rpc.sh@36 -- # jq length 00:08:49.486 22:16:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:49.486 00:08:49.486 real 0m0.151s 00:08:49.486 user 0m0.107s 00:08:49.486 sys 0m0.015s 00:08:49.486 ************************************ 00:08:49.486 END TEST rpc_plugins 00:08:49.486 ************************************ 00:08:49.486 22:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 22:16:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:08:49.486 22:16:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:49.486 22:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 ************************************ 00:08:49.486 START TEST rpc_trace_cmd_test 00:08:49.486 ************************************ 00:08:49.486 22:16:48 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:49.486 22:16:48 -- rpc/rpc.sh@40 -- # local info 00:08:49.486 22:16:48 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:49.486 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.486 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.486 22:16:48 -- rpc/rpc.sh@42 -- # info='{ 00:08:49.486 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid105050", 00:08:49.486 "tpoint_group_mask": "0x8", 00:08:49.486 "iscsi_conn": { 00:08:49.486 "mask": "0x2", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "scsi": { 00:08:49.486 "mask": "0x4", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "bdev": { 00:08:49.486 "mask": "0x8", 00:08:49.486 "tpoint_mask": "0xffffffffffffffff" 00:08:49.486 }, 00:08:49.486 "nvmf_rdma": { 00:08:49.486 "mask": "0x10", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "nvmf_tcp": { 00:08:49.486 "mask": "0x20", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "ftl": { 00:08:49.486 "mask": "0x40", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "blobfs": { 00:08:49.486 "mask": "0x80", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "dsa": { 00:08:49.486 "mask": "0x200", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "thread": { 00:08:49.486 "mask": "0x400", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "nvme_pcie": { 00:08:49.486 "mask": "0x800", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "iaa": { 00:08:49.486 "mask": "0x1000", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "nvme_tcp": { 00:08:49.486 "mask": "0x2000", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 }, 00:08:49.486 "bdev_nvme": { 00:08:49.486 "mask": "0x4000", 00:08:49.486 "tpoint_mask": "0x0" 00:08:49.486 } 00:08:49.486 }' 00:08:49.486 22:16:48 -- rpc/rpc.sh@43 -- # jq length 00:08:49.486 22:16:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:49.486 22:16:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:49.745 22:16:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:49.745 22:16:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:49.745 22:16:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:49.745 22:16:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:49.745 22:16:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:49.745 22:16:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:49.745 22:16:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:49.745 ************************************ 00:08:49.745 END TEST rpc_trace_cmd_test 00:08:49.745 ************************************ 00:08:49.745 00:08:49.745 real 0m0.290s 00:08:49.745 user 0m0.262s 00:08:49.745 sys 0m0.021s 00:08:49.745 22:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.745 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 22:16:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:50.003 22:16:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:50.003 22:16:48 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:08:50.003 22:16:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.003 22:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.003 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 ************************************ 00:08:50.003 START TEST rpc_daemon_integrity 00:08:50.003 ************************************ 00:08:50.003 22:16:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:50.003 22:16:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:50.003 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.003 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.003 22:16:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:50.003 22:16:48 -- rpc/rpc.sh@13 -- # jq length 00:08:50.003 22:16:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:50.003 22:16:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:50.003 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.003 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.003 22:16:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:50.003 22:16:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:50.003 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.003 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.003 22:16:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:50.003 { 00:08:50.003 "name": "Malloc2", 00:08:50.003 "aliases": [ 00:08:50.003 "5059694a-2b47-44dc-9aa1-26fb7ade4875" 00:08:50.003 ], 00:08:50.003 "product_name": "Malloc disk", 00:08:50.003 "block_size": 512, 00:08:50.003 "num_blocks": 16384, 00:08:50.003 "uuid": "5059694a-2b47-44dc-9aa1-26fb7ade4875", 00:08:50.003 "assigned_rate_limits": { 00:08:50.003 "rw_ios_per_sec": 0, 00:08:50.003 "rw_mbytes_per_sec": 0, 00:08:50.003 "r_mbytes_per_sec": 0, 00:08:50.003 "w_mbytes_per_sec": 0 00:08:50.003 }, 00:08:50.003 "claimed": false, 00:08:50.003 "zoned": false, 00:08:50.003 "supported_io_types": { 00:08:50.003 "read": true, 00:08:50.003 "write": true, 00:08:50.003 "unmap": true, 00:08:50.003 "write_zeroes": true, 00:08:50.003 "flush": true, 00:08:50.003 "reset": true, 00:08:50.003 "compare": false, 00:08:50.003 "compare_and_write": false, 00:08:50.003 "abort": true, 00:08:50.003 "nvme_admin": false, 00:08:50.003 "nvme_io": false 00:08:50.003 }, 00:08:50.003 "memory_domains": [ 00:08:50.003 { 00:08:50.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.003 "dma_device_type": 2 00:08:50.003 } 00:08:50.003 ], 00:08:50.003 "driver_specific": {} 00:08:50.003 } 00:08:50.003 ]' 00:08:50.003 22:16:48 -- rpc/rpc.sh@17 -- # jq length 00:08:50.003 22:16:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:50.003 22:16:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:50.003 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.003 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 [2024-07-14 22:16:48.993314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:50.003 [2024-07-14 22:16:48.993415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.003 [2024-07-14 22:16:48.993464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:50.003 
[2024-07-14 22:16:48.993484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.003 [2024-07-14 22:16:48.995692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.003 [2024-07-14 22:16:48.995826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:50.003 Passthru0 00:08:50.003 22:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.004 22:16:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:50.004 22:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.004 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.004 22:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.004 22:16:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:50.004 { 00:08:50.004 "name": "Malloc2", 00:08:50.004 "aliases": [ 00:08:50.004 "5059694a-2b47-44dc-9aa1-26fb7ade4875" 00:08:50.004 ], 00:08:50.004 "product_name": "Malloc disk", 00:08:50.004 "block_size": 512, 00:08:50.004 "num_blocks": 16384, 00:08:50.004 "uuid": "5059694a-2b47-44dc-9aa1-26fb7ade4875", 00:08:50.004 "assigned_rate_limits": { 00:08:50.004 "rw_ios_per_sec": 0, 00:08:50.004 "rw_mbytes_per_sec": 0, 00:08:50.004 "r_mbytes_per_sec": 0, 00:08:50.004 "w_mbytes_per_sec": 0 00:08:50.004 }, 00:08:50.004 "claimed": true, 00:08:50.004 "claim_type": "exclusive_write", 00:08:50.004 "zoned": false, 00:08:50.004 "supported_io_types": { 00:08:50.004 "read": true, 00:08:50.004 "write": true, 00:08:50.004 "unmap": true, 00:08:50.004 "write_zeroes": true, 00:08:50.004 "flush": true, 00:08:50.004 "reset": true, 00:08:50.004 "compare": false, 00:08:50.004 "compare_and_write": false, 00:08:50.004 "abort": true, 00:08:50.004 "nvme_admin": false, 00:08:50.004 "nvme_io": false 00:08:50.004 }, 00:08:50.004 "memory_domains": [ 00:08:50.004 { 00:08:50.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.004 "dma_device_type": 2 00:08:50.004 } 00:08:50.004 ], 00:08:50.004 "driver_specific": {} 00:08:50.004 }, 00:08:50.004 { 00:08:50.004 "name": "Passthru0", 00:08:50.004 "aliases": [ 00:08:50.004 "bb1d674d-9306-52ef-ad69-a6772d1a5deb" 00:08:50.004 ], 00:08:50.004 "product_name": "passthru", 00:08:50.004 "block_size": 512, 00:08:50.004 "num_blocks": 16384, 00:08:50.004 "uuid": "bb1d674d-9306-52ef-ad69-a6772d1a5deb", 00:08:50.004 "assigned_rate_limits": { 00:08:50.004 "rw_ios_per_sec": 0, 00:08:50.004 "rw_mbytes_per_sec": 0, 00:08:50.004 "r_mbytes_per_sec": 0, 00:08:50.004 "w_mbytes_per_sec": 0 00:08:50.004 }, 00:08:50.004 "claimed": false, 00:08:50.004 "zoned": false, 00:08:50.004 "supported_io_types": { 00:08:50.004 "read": true, 00:08:50.004 "write": true, 00:08:50.004 "unmap": true, 00:08:50.004 "write_zeroes": true, 00:08:50.004 "flush": true, 00:08:50.004 "reset": true, 00:08:50.004 "compare": false, 00:08:50.004 "compare_and_write": false, 00:08:50.004 "abort": true, 00:08:50.004 "nvme_admin": false, 00:08:50.004 "nvme_io": false 00:08:50.004 }, 00:08:50.004 "memory_domains": [ 00:08:50.004 { 00:08:50.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.004 "dma_device_type": 2 00:08:50.004 } 00:08:50.004 ], 00:08:50.004 "driver_specific": { 00:08:50.004 "passthru": { 00:08:50.004 "name": "Passthru0", 00:08:50.004 "base_bdev_name": "Malloc2" 00:08:50.004 } 00:08:50.004 } 00:08:50.004 } 00:08:50.004 ]' 00:08:50.004 22:16:49 -- rpc/rpc.sh@21 -- # jq length 00:08:50.261 22:16:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:50.261 22:16:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:50.261 22:16:49 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.261 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.261 22:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.261 22:16:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:50.261 22:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.261 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.261 22:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.261 22:16:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:50.261 22:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.261 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.261 22:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.261 22:16:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:50.261 22:16:49 -- rpc/rpc.sh@26 -- # jq length 00:08:50.261 22:16:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:50.261 00:08:50.261 real 0m0.348s 00:08:50.261 user 0m0.238s 00:08:50.261 sys 0m0.028s 00:08:50.261 ************************************ 00:08:50.261 END TEST rpc_daemon_integrity 00:08:50.261 ************************************ 00:08:50.261 22:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.261 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.261 22:16:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:50.261 22:16:49 -- rpc/rpc.sh@84 -- # killprocess 105050 00:08:50.261 22:16:49 -- common/autotest_common.sh@926 -- # '[' -z 105050 ']' 00:08:50.261 22:16:49 -- common/autotest_common.sh@930 -- # kill -0 105050 00:08:50.261 22:16:49 -- common/autotest_common.sh@931 -- # uname 00:08:50.261 22:16:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.261 22:16:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105050 00:08:50.261 22:16:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.261 killing process with pid 105050 00:08:50.261 22:16:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.261 22:16:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105050' 00:08:50.261 22:16:49 -- common/autotest_common.sh@945 -- # kill 105050 00:08:50.261 22:16:49 -- common/autotest_common.sh@950 -- # wait 105050 00:08:52.158 00:08:52.158 real 0m5.079s 00:08:52.158 user 0m6.222s 00:08:52.158 sys 0m0.661s 00:08:52.158 22:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.158 ************************************ 00:08:52.158 END TEST rpc 00:08:52.158 ************************************ 00:08:52.158 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.158 22:16:51 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:52.158 22:16:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:52.158 22:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.158 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.158 ************************************ 00:08:52.158 START TEST rpc_client 00:08:52.158 ************************************ 00:08:52.158 22:16:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:52.158 * Looking for test storage... 
00:08:52.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:52.417 22:16:51 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:52.417 OK 00:08:52.417 22:16:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:52.417 ************************************ 00:08:52.417 END TEST rpc_client 00:08:52.417 ************************************ 00:08:52.417 00:08:52.417 real 0m0.142s 00:08:52.417 user 0m0.072s 00:08:52.417 sys 0m0.081s 00:08:52.417 22:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.417 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.417 22:16:51 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:52.417 22:16:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:52.417 22:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.417 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.417 ************************************ 00:08:52.417 START TEST json_config 00:08:52.417 ************************************ 00:08:52.417 22:16:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:52.417 22:16:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.417 22:16:51 -- nvmf/common.sh@7 -- # uname -s 00:08:52.417 22:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.417 22:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.417 22:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.417 22:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.417 22:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.417 22:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.417 22:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.417 22:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.417 22:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.417 22:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.417 22:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dc5c259f-a827-4530-957e-8cca3825f032 00:08:52.417 22:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=dc5c259f-a827-4530-957e-8cca3825f032 00:08:52.417 22:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.417 22:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.417 22:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:52.417 22:16:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.417 22:16:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.417 22:16:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.417 22:16:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.417 22:16:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:52.417 22:16:51 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:52.417 22:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:52.417 22:16:51 -- paths/export.sh@5 -- # export PATH 00:08:52.417 22:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:52.417 22:16:51 -- nvmf/common.sh@46 -- # : 0 00:08:52.417 22:16:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:52.417 22:16:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:52.417 22:16:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:52.417 22:16:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.417 22:16:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.417 22:16:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:52.417 22:16:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:52.417 22:16:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:52.417 22:16:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:52.417 22:16:51 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:08:52.417 22:16:51 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:52.417 22:16:51 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:08:52.417 22:16:51 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:52.417 22:16:51 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:08:52.417 22:16:51 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:52.417 22:16:51 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:08:52.417 22:16:51 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:52.417 22:16:51 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:52.417 22:16:51 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:52.417 22:16:51 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:52.417 INFO: JSON configuration test 
init 00:08:52.417 22:16:51 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:52.417 22:16:51 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:52.417 22:16:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:52.417 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.417 22:16:51 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:52.417 22:16:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:52.417 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.417 Waiting for target to run... 00:08:52.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:52.417 22:16:51 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:52.417 22:16:51 -- json_config/json_config.sh@98 -- # local app=target 00:08:52.417 22:16:51 -- json_config/json_config.sh@99 -- # shift 00:08:52.417 22:16:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:52.417 22:16:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:52.417 22:16:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=105342 00:08:52.417 22:16:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:52.417 22:16:51 -- json_config/json_config.sh@114 -- # waitforlisten 105342 /var/tmp/spdk_tgt.sock 00:08:52.417 22:16:51 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:52.417 22:16:51 -- common/autotest_common.sh@819 -- # '[' -z 105342 ']' 00:08:52.417 22:16:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:52.417 22:16:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.417 22:16:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:52.417 22:16:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.417 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.676 [2024-07-14 22:16:51.491622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
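Because the target here is launched with --wait-for-rpc, it parks after EAL setup and waits for an explicit go-ahead before initializing subsystems; json_config drives that through load_config, but done by hand it is roughly (sketch, same socket as above):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init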
00:08:52.676 [2024-07-14 22:16:51.491844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105342 ] 00:08:52.934 [2024-07-14 22:16:51.947325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.193 [2024-07-14 22:16:52.129941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.193 [2024-07-14 22:16:52.130246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.451 22:16:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:53.451 00:08:53.451 22:16:52 -- common/autotest_common.sh@852 -- # return 0 00:08:53.451 22:16:52 -- json_config/json_config.sh@115 -- # echo '' 00:08:53.451 22:16:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:53.451 22:16:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:53.451 22:16:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:53.451 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.451 22:16:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:53.451 22:16:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:53.451 22:16:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:53.451 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.451 22:16:52 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:53.451 22:16:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:53.451 22:16:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:54.396 22:16:53 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:54.396 22:16:53 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:54.396 22:16:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:54.397 22:16:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.397 22:16:53 -- json_config/json_config.sh@48 -- # local ret=0 00:08:54.397 22:16:53 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:08:54.397 22:16:53 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:54.397 22:16:53 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:08:54.397 22:16:53 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:54.397 22:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:54.397 22:16:53 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:54.658 22:16:53 -- json_config/json_config.sh@51 -- # local get_types 00:08:54.658 22:16:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:54.658 22:16:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:54.658 22:16:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:54.658 22:16:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 22:16:53 -- json_config/json_config.sh@58 -- # return 0 00:08:54.658 22:16:53 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:54.658 22:16:53 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:54.658 22:16:53 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:54.658 22:16:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:54.658 22:16:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 22:16:53 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:54.658 22:16:53 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:54.658 22:16:53 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:54.658 22:16:53 -- json_config/json_config.sh@164 -- # get_notifications 00:08:54.658 22:16:53 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:54.658 22:16:53 -- json_config/json_config.sh@64 -- # IFS=: 00:08:54.658 22:16:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:54.658 22:16:53 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:54.658 22:16:53 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:54.658 22:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:54.917 22:16:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:54.917 22:16:53 -- json_config/json_config.sh@64 -- # IFS=: 00:08:54.917 22:16:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:54.917 22:16:53 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:54.917 22:16:53 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:54.917 22:16:53 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:54.917 22:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:55.175 Nvme0n1p0 Nvme0n1p1 00:08:55.175 22:16:54 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:55.175 22:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:55.433 [2024-07-14 22:16:54.308163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:55.433 [2024-07-14 22:16:54.308306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:55.433 00:08:55.433 22:16:54 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:55.433 22:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:55.691 Malloc3 00:08:55.691 22:16:54 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:55.691 22:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:55.691 [2024-07-14 22:16:54.691254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:55.691 [2024-07-14 22:16:54.691394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.691 [2024-07-14 22:16:54.691434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:55.691 [2024-07-14 22:16:54.691465] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:55.691 [2024-07-14 22:16:54.693824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.691 [2024-07-14 22:16:54.693897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:55.691 PTBdevFromMalloc3 00:08:55.691 22:16:54 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:55.691 22:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:55.950 Null0 00:08:55.950 22:16:54 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:55.950 22:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:56.209 Malloc0 00:08:56.467 22:16:55 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:56.467 22:16:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:56.467 Malloc1 00:08:56.725 22:16:55 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:56.725 22:16:55 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:56.983 102400+0 records in 00:08:56.983 102400+0 records out 00:08:56.983 104857600 bytes (105 MB, 100 MiB) copied, 0.265609 s, 395 MB/s 00:08:56.983 22:16:55 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:56.983 22:16:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:57.241 aio_disk 00:08:57.241 22:16:56 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:57.241 22:16:56 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:57.241 22:16:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:57.501 8c02ed17-e3c4-424e-8a5f-4e37fef9a31b 00:08:57.501 22:16:56 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:57.501 22:16:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:57.501 22:16:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:57.501 22:16:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:57.501 22:16:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:57.759 22:16:56 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:57.759 22:16:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:58.017 22:16:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:58.017 22:16:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:58.276 22:16:57 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:58.276 22:16:57 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:58.276 22:16:57 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb bdev_register:201313db-9f4e-48a6-8215-3adc75872765 bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda 00:08:58.276 22:16:57 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:58.276 22:16:57 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:58.276 22:16:57 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:58.276 22:16:57 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb bdev_register:201313db-9f4e-48a6-8215-3adc75872765 bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda 00:08:58.276 22:16:57 -- json_config/json_config.sh@74 -- # sort 00:08:58.276 22:16:57 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:58.276 22:16:57 -- json_config/json_config.sh@75 -- # get_notifications 00:08:58.276 22:16:57 -- json_config/json_config.sh@75 -- # sort 00:08:58.276 22:16:57 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:58.276 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.276 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.276 22:16:57 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:58.276 22:16:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:58.276 22:16:57 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:201313db-9f4e-48a6-8215-3adc75872765 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@65 -- # echo bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # IFS=: 00:08:58.535 22:16:57 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:58.535 22:16:57 -- json_config/json_config.sh@77 
-- # [[ bdev_register:201313db-9f4e-48a6-8215-3adc75872765 bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\0\1\3\1\3\d\b\-\9\f\4\e\-\4\8\a\6\-\8\2\1\5\-\3\a\d\c\7\5\8\7\2\7\6\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\0\a\b\5\6\0\3\-\8\5\2\e\-\4\7\2\1\-\9\f\9\b\-\8\8\f\0\6\a\9\e\2\e\b\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\9\e\1\1\d\2\5\-\6\b\6\0\-\4\2\c\b\-\a\1\0\6\-\d\3\6\c\3\7\9\3\d\7\e\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\5\8\5\4\5\3\2\-\a\d\3\5\-\4\2\0\8\-\8\7\f\f\-\e\8\c\8\0\f\c\6\1\b\d\a ]] 00:08:58.535 22:16:57 -- json_config/json_config.sh@89 -- # cat 00:08:58.535 22:16:57 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:201313db-9f4e-48a6-8215-3adc75872765 bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda 00:08:58.535 Expected events matched: 00:08:58.535 bdev_register:201313db-9f4e-48a6-8215-3adc75872765 00:08:58.535 bdev_register:20ab5603-852e-4721-9f9b-88f06a9e2ebb 00:08:58.535 bdev_register:59e11d25-6b60-42cb-a106-d36c3793d7ed 00:08:58.535 bdev_register:Malloc0 00:08:58.535 bdev_register:Malloc0p0 00:08:58.535 bdev_register:Malloc0p1 00:08:58.535 bdev_register:Malloc0p2 00:08:58.535 bdev_register:Malloc1 00:08:58.535 bdev_register:Malloc3 00:08:58.535 bdev_register:Null0 00:08:58.535 bdev_register:Nvme0n1 00:08:58.535 bdev_register:Nvme0n1p0 00:08:58.535 bdev_register:Nvme0n1p1 00:08:58.535 bdev_register:PTBdevFromMalloc3 00:08:58.535 bdev_register:aio_disk 00:08:58.535 bdev_register:f5854532-ad35-4208-87ff-e8c80fc61bda 00:08:58.535 22:16:57 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:58.535 22:16:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.535 22:16:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 22:16:57 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:58.535 22:16:57 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:58.535 22:16:57 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:58.535 22:16:57 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:58.535 22:16:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.535 22:16:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 
22:16:57 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:58.536 22:16:57 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:58.536 22:16:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:58.793 MallocBdevForConfigChangeCheck 00:08:58.794 22:16:57 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:58.794 22:16:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.794 22:16:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.794 22:16:57 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:58.794 22:16:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:59.377 INFO: shutting down applications... 00:08:59.377 22:16:58 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:59.377 22:16:58 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:59.377 22:16:58 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:59.377 22:16:58 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:59.377 22:16:58 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:59.377 [2024-07-14 22:16:58.337748] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:59.634 Calling clear_vhost_scsi_subsystem 00:08:59.634 Calling clear_iscsi_subsystem 00:08:59.634 Calling clear_vhost_blk_subsystem 00:08:59.634 Calling clear_nbd_subsystem 00:08:59.634 Calling clear_nvmf_subsystem 00:08:59.634 Calling clear_bdev_subsystem 00:08:59.634 Calling clear_accel_subsystem 00:08:59.634 Calling clear_iobuf_subsystem 00:08:59.634 Calling clear_sock_subsystem 00:08:59.634 Calling clear_vmd_subsystem 00:08:59.634 Calling clear_scheduler_subsystem 00:08:59.634 22:16:58 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:59.634 22:16:58 -- json_config/json_config.sh@396 -- # count=100 00:08:59.634 22:16:58 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:59.634 22:16:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:59.634 22:16:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:59.634 22:16:58 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:59.891 22:16:58 -- json_config/json_config.sh@398 -- # break 00:08:59.891 22:16:58 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:59.891 22:16:58 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:59.891 22:16:58 -- json_config/json_config.sh@120 -- # local app=target 00:08:59.891 22:16:58 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:59.891 22:16:58 -- json_config/json_config.sh@124 -- # [[ -n 105342 ]] 00:08:59.891 22:16:58 -- json_config/json_config.sh@127 -- # kill -SIGINT 105342 00:08:59.891 22:16:58 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:59.891 22:16:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:59.891 22:16:58 -- 
json_config/json_config.sh@130 -- # kill -0 105342 00:08:59.891 22:16:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:00.454 22:16:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:00.454 22:16:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:00.454 22:16:59 -- json_config/json_config.sh@130 -- # kill -0 105342 00:09:00.455 22:16:59 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:01.019 22:16:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:01.019 22:16:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:01.019 22:16:59 -- json_config/json_config.sh@130 -- # kill -0 105342 00:09:01.019 22:16:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:01.019 SPDK target shutdown done 00:09:01.019 22:16:59 -- json_config/json_config.sh@132 -- # break 00:09:01.019 22:16:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:01.019 22:16:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:01.019 INFO: relaunching applications... 00:09:01.019 22:16:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:09:01.019 22:16:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:01.019 22:16:59 -- json_config/json_config.sh@98 -- # local app=target 00:09:01.019 22:16:59 -- json_config/json_config.sh@99 -- # shift 00:09:01.019 22:16:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:01.019 22:16:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:01.019 22:16:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:01.019 22:16:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:01.019 22:16:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:01.019 22:16:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=105612 00:09:01.019 22:16:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:01.019 Waiting for target to run... 00:09:01.019 22:16:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:01.019 22:16:59 -- json_config/json_config.sh@114 -- # waitforlisten 105612 /var/tmp/spdk_tgt.sock 00:09:01.019 22:16:59 -- common/autotest_common.sh@819 -- # '[' -z 105612 ']' 00:09:01.019 22:16:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:01.019 22:16:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:01.019 22:16:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:01.019 22:16:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.019 22:16:59 -- common/autotest_common.sh@10 -- # set +x 00:09:01.019 [2024-07-14 22:16:59.918877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
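The relaunch above hands control to waitforlisten, which blocks until the freshly started target answers on its RPC socket. The shape of that readiness poll, as a minimal sketch (socket path and retry budget as seen in this run; the real helper lives in autotest_common.sh and differs in detail):

    # Poll the RPC socket until spdk_tgt answers or the retry budget runs out.
    wait_for_rpc() {
      local sock=${1:-/var/tmp/spdk_tgt.sock} retries=${2:-100}
      while (( retries-- > 0 )); do
        # spdk_get_version is the cheapest RPC; -t 1 bounds each attempt to 1s.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
            spdk_get_version >/dev/null 2>&1; then
          return 0
        fi
        sleep 0.5
      done
      return 1  # target never started listening
    }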
00:09:01.019 [2024-07-14 22:16:59.919035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105612 ] 00:09:01.583 [2024-07-14 22:17:00.371327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.583 [2024-07-14 22:17:00.514323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.583 [2024-07-14 22:17:00.514552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.147 [2024-07-14 22:17:01.070797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:02.147 [2024-07-14 22:17:01.070914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:02.147 [2024-07-14 22:17:01.078765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:02.147 [2024-07-14 22:17:01.078834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:02.147 [2024-07-14 22:17:01.086790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:02.147 [2024-07-14 22:17:01.086868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:02.147 [2024-07-14 22:17:01.086899] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:02.147 [2024-07-14 22:17:01.176710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:02.147 [2024-07-14 22:17:01.176795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.147 [2024-07-14 22:17:01.176830] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:02.147 [2024-07-14 22:17:01.176855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.147 [2024-07-14 22:17:01.177338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.147 [2024-07-14 22:17:01.177400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:02.712 22:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:02.712 22:17:01 -- common/autotest_common.sh@852 -- # return 0 00:09:02.712 22:17:01 -- json_config/json_config.sh@115 -- # echo '' 00:09:02.712 00:09:02.712 22:17:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:02.712 INFO: Checking if target configuration is the same... 00:09:02.712 22:17:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:02.712 22:17:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:02.712 22:17:01 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:02.712 22:17:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:02.712 + '[' 2 -ne 2 ']' 00:09:02.712 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:02.712 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:02.712 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:02.712 +++ basename /dev/fd/62 00:09:02.712 ++ mktemp /tmp/62.XXX 00:09:02.712 + tmp_file_1=/tmp/62.kKz 00:09:02.712 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:02.712 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:02.712 + tmp_file_2=/tmp/spdk_tgt_config.json.tbe 00:09:02.712 + ret=0 00:09:02.712 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:02.970 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:02.970 + diff -u /tmp/62.kKz /tmp/spdk_tgt_config.json.tbe 00:09:02.970 INFO: JSON config files are the same 00:09:02.971 + echo 'INFO: JSON config files are the same' 00:09:02.971 + rm /tmp/62.kKz /tmp/spdk_tgt_config.json.tbe 00:09:02.971 + exit 0 00:09:02.971 22:17:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:02.971 INFO: changing configuration and checking if this can be detected... 00:09:02.971 22:17:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:02.971 22:17:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:02.971 22:17:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.229 22:17:02 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:03.229 22:17:02 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.229 22:17:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.229 + '[' 2 -ne 2 ']' 00:09:03.229 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:03.229 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:03.229 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:03.229 +++ basename /dev/fd/62 00:09:03.229 ++ mktemp /tmp/62.XXX 00:09:03.229 + tmp_file_1=/tmp/62.ibv 00:09:03.229 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.229 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.229 + tmp_file_2=/tmp/spdk_tgt_config.json.dl7 00:09:03.229 + ret=0 00:09:03.229 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:03.488 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:03.488 + diff -u /tmp/62.ibv /tmp/spdk_tgt_config.json.dl7 00:09:03.488 + ret=1 00:09:03.488 + echo '=== Start of file: /tmp/62.ibv ===' 00:09:03.488 + cat /tmp/62.ibv 00:09:03.488 + echo '=== End of file: /tmp/62.ibv ===' 00:09:03.488 + echo '' 00:09:03.488 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dl7 ===' 00:09:03.488 + cat /tmp/spdk_tgt_config.json.dl7 00:09:03.488 + echo '=== End of file: /tmp/spdk_tgt_config.json.dl7 ===' 00:09:03.488 + echo '' 00:09:03.488 + rm /tmp/62.ibv /tmp/spdk_tgt_config.json.dl7 00:09:03.488 + exit 1 00:09:03.488 INFO: configuration change detected. 00:09:03.488 22:17:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
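Both json_diff.sh runs above, the matching one and the deliberately perturbed one, reduce to the same sort-then-diff comparison. A condensed sketch (assuming config_filter.py reads stdin and writes stdout, which the xtrace redirections elide):

    # Compare two SPDK JSON configs order-insensitively; returns 0 if equivalent.
    json_cmp() {
      local a=$1 b=$2 rc=0
      local tmp_a tmp_b
      tmp_a=$(mktemp /tmp/62.XXX)
      tmp_b=$(mktemp /tmp/spdk_tgt_config.json.XXX)
      # Sorting first makes the diff immune to subsystem and method ordering.
      /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < "$a" > "$tmp_a"
      /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < "$b" > "$tmp_b"
      diff -u "$tmp_a" "$tmp_b" || rc=1
      rm -f "$tmp_a" "$tmp_b"
      return $rc
    }

Deleting MallocBdevForConfigChangeCheck before the second comparison is what flips the result from exit 0 to exit 1.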
00:09:03.488 22:17:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:03.488 22:17:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:03.488 22:17:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:03.488 22:17:02 -- common/autotest_common.sh@10 -- # set +x 00:09:03.745 22:17:02 -- json_config/json_config.sh@360 -- # local ret=0 00:09:03.745 22:17:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:03.745 22:17:02 -- json_config/json_config.sh@370 -- # [[ -n 105612 ]] 00:09:03.745 22:17:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:03.745 22:17:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:03.745 22:17:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:03.745 22:17:02 -- common/autotest_common.sh@10 -- # set +x 00:09:03.745 22:17:02 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:09:03.745 22:17:02 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:03.745 22:17:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:03.745 22:17:02 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:03.745 22:17:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:04.003 22:17:02 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:04.003 22:17:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:04.262 22:17:03 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:04.262 22:17:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:04.262 22:17:03 -- json_config/json_config.sh@246 -- # uname -s 00:09:04.262 22:17:03 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:04.262 22:17:03 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:04.262 22:17:03 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:04.262 22:17:03 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:04.262 22:17:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:04.262 22:17:03 -- common/autotest_common.sh@10 -- # set +x 00:09:04.262 22:17:03 -- json_config/json_config.sh@376 -- # killprocess 105612 00:09:04.262 22:17:03 -- common/autotest_common.sh@926 -- # '[' -z 105612 ']' 00:09:04.262 22:17:03 -- common/autotest_common.sh@930 -- # kill -0 105612 00:09:04.520 22:17:03 -- common/autotest_common.sh@931 -- # uname 00:09:04.520 22:17:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:04.520 22:17:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105612 00:09:04.520 22:17:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:04.520 killing process with pid 105612 00:09:04.520 22:17:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:04.520 22:17:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105612' 00:09:04.520 22:17:03 -- common/autotest_common.sh@945 -- # kill 105612 00:09:04.520 22:17:03 -- common/autotest_common.sh@950 -- # wait 105612 00:09:05.454 22:17:04 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:05.454 22:17:04 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:05.454 22:17:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:05.454 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:09:05.454 22:17:04 -- json_config/json_config.sh@381 -- # return 0 00:09:05.454 INFO: Success 00:09:05.454 22:17:04 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:05.454 00:09:05.454 real 0m12.849s 00:09:05.454 user 0m18.490s 00:09:05.454 sys 0m2.132s 00:09:05.454 22:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.454 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:09:05.454 ************************************ 00:09:05.454 END TEST json_config 00:09:05.454 ************************************ 00:09:05.454 22:17:04 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:05.454 22:17:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:05.454 22:17:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.454 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:09:05.454 ************************************ 00:09:05.454 START TEST json_config_extra_key 00:09:05.454 ************************************ 00:09:05.454 22:17:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.454 22:17:04 -- nvmf/common.sh@7 -- # uname -s 00:09:05.454 22:17:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.454 22:17:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.454 22:17:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.454 22:17:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.454 22:17:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.454 22:17:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.454 22:17:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.454 22:17:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.454 22:17:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.454 22:17:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.454 22:17:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c1b2236-0685-4719-8f0d-b656fc5efd88 00:09:05.454 22:17:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c1b2236-0685-4719-8f0d-b656fc5efd88 00:09:05.454 22:17:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.454 22:17:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.454 22:17:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:05.454 22:17:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.454 22:17:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.454 22:17:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.454 22:17:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.454 22:17:04 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:05.454 22:17:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:05.454 22:17:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:05.454 22:17:04 -- paths/export.sh@5 -- # export PATH 00:09:05.454 22:17:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:05.454 22:17:04 -- nvmf/common.sh@46 -- # : 0 00:09:05.454 22:17:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:05.454 22:17:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:05.454 22:17:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:05.454 22:17:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.454 22:17:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.454 22:17:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:05.454 22:17:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:05.454 22:17:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:09:05.454 INFO: launching applications... 
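The four `declare -A` maps set up above are the extra-key test's entire bookkeeping: one slot per app for its PID, RPC socket, extra spdk_tgt flags, and config file. A launcher consuming them looks roughly like this (a sketch in the style of json_config_test_start_app, not the verbatim helper):

    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    start_app() {
      local app=$1
      "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!   # remembered so shutdown can signal and poll this PID
    }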
00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=105815 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:09:05.454 Waiting for target to run... 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:05.454 22:17:04 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 105815 /var/tmp/spdk_tgt.sock 00:09:05.454 22:17:04 -- common/autotest_common.sh@819 -- # '[' -z 105815 ']' 00:09:05.454 22:17:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:05.454 22:17:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:05.454 22:17:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:05.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:05.454 22:17:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:05.454 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:09:05.454 [2024-07-14 22:17:04.376427] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:05.454 [2024-07-14 22:17:04.377194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105815 ] 00:09:06.021 [2024-07-14 22:17:04.832519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.021 [2024-07-14 22:17:04.976806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.021 [2024-07-14 22:17:04.977040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.957 00:09:06.957 INFO: shutting down applications... 00:09:06.957 22:17:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:06.957 22:17:05 -- common/autotest_common.sh@852 -- # return 0 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
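The shutdown that begins here is a bounded ask-then-poll: send SIGINT once, then check liveness with `kill -0` every half second for up to 30 tries. Condensed from the loop traced below:

    shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid"                      # request a clean exit
      for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests existence
        sleep 0.5
      done
      echo 'SPDK target shutdown done'
    }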
00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 105815 ]] 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 105815 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:06.957 22:17:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:07.524 22:17:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:07.524 22:17:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:07.524 22:17:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:07.524 22:17:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:08.091 22:17:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:08.091 22:17:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:08.091 22:17:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:08.091 22:17:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:08.658 22:17:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:08.658 22:17:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:08.658 22:17:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:08.658 22:17:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:09.225 22:17:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:09.225 22:17:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:09.225 22:17:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:09.225 22:17:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:09.484 SPDK target shutdown done 00:09:09.484 Success 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105815 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:09.484 22:17:08 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:09.484 00:09:09.484 real 0m4.270s 00:09:09.484 user 0m3.821s 00:09:09.484 sys 0m0.535s 00:09:09.484 22:17:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.484 22:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:09.484 ************************************ 00:09:09.484 END TEST json_config_extra_key 00:09:09.484 ************************************ 00:09:09.484 22:17:08 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:09.484 22:17:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.484 22:17:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.484 22:17:08 -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.743 ************************************ 00:09:09.743 START TEST alias_rpc 00:09:09.743 ************************************ 00:09:09.743 22:17:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:09.743 * Looking for test storage... 00:09:09.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:09.743 22:17:08 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:09.743 22:17:08 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=105924 00:09:09.743 22:17:08 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 105924 00:09:09.743 22:17:08 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.743 22:17:08 -- common/autotest_common.sh@819 -- # '[' -z 105924 ']' 00:09:09.743 22:17:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.743 22:17:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.743 22:17:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.743 22:17:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.743 22:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 [2024-07-14 22:17:08.695854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:09.743 [2024-07-14 22:17:08.696852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105924 ] 00:09:10.013 [2024-07-14 22:17:08.864248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.013 [2024-07-14 22:17:09.019587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.013 [2024-07-14 22:17:09.019863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.394 22:17:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:11.394 22:17:10 -- common/autotest_common.sh@852 -- # return 0 00:09:11.394 22:17:10 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:11.653 22:17:10 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 105924 00:09:11.653 22:17:10 -- common/autotest_common.sh@926 -- # '[' -z 105924 ']' 00:09:11.653 22:17:10 -- common/autotest_common.sh@930 -- # kill -0 105924 00:09:11.653 22:17:10 -- common/autotest_common.sh@931 -- # uname 00:09:11.653 22:17:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:11.653 22:17:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105924 00:09:11.653 killing process with pid 105924 00:09:11.653 22:17:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:11.653 22:17:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:11.653 22:17:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105924' 00:09:11.653 22:17:10 -- common/autotest_common.sh@945 -- # kill 105924 00:09:11.653 22:17:10 -- common/autotest_common.sh@950 -- # wait 105924 00:09:13.554 00:09:13.554 real 0m3.728s 00:09:13.554 user 0m3.957s 00:09:13.554 sys 0m0.508s 00:09:13.554 ************************************ 00:09:13.554 END TEST 
alias_rpc 00:09:13.554 ************************************ 00:09:13.554 22:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.554 22:17:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.554 22:17:12 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:13.554 22:17:12 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:13.554 22:17:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.554 22:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.554 22:17:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.554 ************************************ 00:09:13.554 START TEST spdkcli_tcp 00:09:13.554 ************************************ 00:09:13.554 22:17:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:13.554 * Looking for test storage... 00:09:13.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:13.554 22:17:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:13.554 22:17:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:13.554 22:17:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:13.554 22:17:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=106028 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@27 -- # waitforlisten 106028 00:09:13.554 22:17:12 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:13.554 22:17:12 -- common/autotest_common.sh@819 -- # '[' -z 106028 ']' 00:09:13.554 22:17:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.554 22:17:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.554 22:17:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.554 22:17:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.554 22:17:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.554 [2024-07-14 22:17:12.494330] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
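The spdkcli_tcp test starting here never teaches spdk_tgt to speak TCP itself; it fronts the UNIX-domain RPC socket with a socat bridge and points rpc.py at the TCP side, exactly as the trace below shows. In isolation (a single-shot check; a longer-lived bridge would usually also want socat's fork option):

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 retries the connection, -t 2 bounds each attempt to 2 seconds.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"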
00:09:13.554 [2024-07-14 22:17:12.494546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106028 ] 00:09:13.813 [2024-07-14 22:17:12.669316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.813 [2024-07-14 22:17:12.863183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.813 [2024-07-14 22:17:12.863860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.813 [2024-07-14 22:17:12.863876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.186 22:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:15.186 22:17:14 -- common/autotest_common.sh@852 -- # return 0 00:09:15.186 22:17:14 -- spdkcli/tcp.sh@31 -- # socat_pid=106072 00:09:15.186 22:17:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:15.186 22:17:14 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:15.444 [ 00:09:15.444 "spdk_get_version", 00:09:15.444 "rpc_get_methods", 00:09:15.444 "trace_get_info", 00:09:15.444 "trace_get_tpoint_group_mask", 00:09:15.444 "trace_disable_tpoint_group", 00:09:15.444 "trace_enable_tpoint_group", 00:09:15.444 "trace_clear_tpoint_mask", 00:09:15.444 "trace_set_tpoint_mask", 00:09:15.444 "framework_get_pci_devices", 00:09:15.444 "framework_get_config", 00:09:15.444 "framework_get_subsystems", 00:09:15.444 "iobuf_get_stats", 00:09:15.444 "iobuf_set_options", 00:09:15.444 "sock_set_default_impl", 00:09:15.444 "sock_impl_set_options", 00:09:15.444 "sock_impl_get_options", 00:09:15.444 "vmd_rescan", 00:09:15.444 "vmd_remove_device", 00:09:15.444 "vmd_enable", 00:09:15.444 "accel_get_stats", 00:09:15.444 "accel_set_options", 00:09:15.444 "accel_set_driver", 00:09:15.444 "accel_crypto_key_destroy", 00:09:15.444 "accel_crypto_keys_get", 00:09:15.444 "accel_crypto_key_create", 00:09:15.444 "accel_assign_opc", 00:09:15.444 "accel_get_module_info", 00:09:15.444 "accel_get_opc_assignments", 00:09:15.444 "notify_get_notifications", 00:09:15.444 "notify_get_types", 00:09:15.444 "bdev_get_histogram", 00:09:15.444 "bdev_enable_histogram", 00:09:15.444 "bdev_set_qos_limit", 00:09:15.444 "bdev_set_qd_sampling_period", 00:09:15.444 "bdev_get_bdevs", 00:09:15.444 "bdev_reset_iostat", 00:09:15.444 "bdev_get_iostat", 00:09:15.444 "bdev_examine", 00:09:15.444 "bdev_wait_for_examine", 00:09:15.444 "bdev_set_options", 00:09:15.444 "scsi_get_devices", 00:09:15.444 "thread_set_cpumask", 00:09:15.444 "framework_get_scheduler", 00:09:15.444 "framework_set_scheduler", 00:09:15.444 "framework_get_reactors", 00:09:15.444 "thread_get_io_channels", 00:09:15.444 "thread_get_pollers", 00:09:15.444 "thread_get_stats", 00:09:15.444 "framework_monitor_context_switch", 00:09:15.444 "spdk_kill_instance", 00:09:15.444 "log_enable_timestamps", 00:09:15.444 "log_get_flags", 00:09:15.444 "log_clear_flag", 00:09:15.444 "log_set_flag", 00:09:15.444 "log_get_level", 00:09:15.444 "log_set_level", 00:09:15.444 "log_get_print_level", 00:09:15.444 "log_set_print_level", 00:09:15.444 "framework_enable_cpumask_locks", 00:09:15.444 "framework_disable_cpumask_locks", 00:09:15.444 "framework_wait_init", 00:09:15.444 "framework_start_init", 00:09:15.444 "virtio_blk_create_transport", 00:09:15.444 "virtio_blk_get_transports", 
00:09:15.444 "vhost_controller_set_coalescing", 00:09:15.444 "vhost_get_controllers", 00:09:15.444 "vhost_delete_controller", 00:09:15.444 "vhost_create_blk_controller", 00:09:15.444 "vhost_scsi_controller_remove_target", 00:09:15.444 "vhost_scsi_controller_add_target", 00:09:15.444 "vhost_start_scsi_controller", 00:09:15.444 "vhost_create_scsi_controller", 00:09:15.444 "nbd_get_disks", 00:09:15.444 "nbd_stop_disk", 00:09:15.444 "nbd_start_disk", 00:09:15.444 "env_dpdk_get_mem_stats", 00:09:15.444 "nvmf_subsystem_get_listeners", 00:09:15.444 "nvmf_subsystem_get_qpairs", 00:09:15.444 "nvmf_subsystem_get_controllers", 00:09:15.444 "nvmf_get_stats", 00:09:15.444 "nvmf_get_transports", 00:09:15.444 "nvmf_create_transport", 00:09:15.444 "nvmf_get_targets", 00:09:15.444 "nvmf_delete_target", 00:09:15.444 "nvmf_create_target", 00:09:15.444 "nvmf_subsystem_allow_any_host", 00:09:15.444 "nvmf_subsystem_remove_host", 00:09:15.444 "nvmf_subsystem_add_host", 00:09:15.444 "nvmf_subsystem_remove_ns", 00:09:15.444 "nvmf_subsystem_add_ns", 00:09:15.444 "nvmf_subsystem_listener_set_ana_state", 00:09:15.444 "nvmf_discovery_get_referrals", 00:09:15.444 "nvmf_discovery_remove_referral", 00:09:15.444 "nvmf_discovery_add_referral", 00:09:15.444 "nvmf_subsystem_remove_listener", 00:09:15.444 "nvmf_subsystem_add_listener", 00:09:15.444 "nvmf_delete_subsystem", 00:09:15.444 "nvmf_create_subsystem", 00:09:15.444 "nvmf_get_subsystems", 00:09:15.444 "nvmf_set_crdt", 00:09:15.444 "nvmf_set_config", 00:09:15.444 "nvmf_set_max_subsystems", 00:09:15.444 "iscsi_set_options", 00:09:15.444 "iscsi_get_auth_groups", 00:09:15.444 "iscsi_auth_group_remove_secret", 00:09:15.444 "iscsi_auth_group_add_secret", 00:09:15.444 "iscsi_delete_auth_group", 00:09:15.444 "iscsi_create_auth_group", 00:09:15.444 "iscsi_set_discovery_auth", 00:09:15.444 "iscsi_get_options", 00:09:15.444 "iscsi_target_node_request_logout", 00:09:15.444 "iscsi_target_node_set_redirect", 00:09:15.444 "iscsi_target_node_set_auth", 00:09:15.444 "iscsi_target_node_add_lun", 00:09:15.444 "iscsi_get_connections", 00:09:15.444 "iscsi_portal_group_set_auth", 00:09:15.444 "iscsi_start_portal_group", 00:09:15.444 "iscsi_delete_portal_group", 00:09:15.444 "iscsi_create_portal_group", 00:09:15.444 "iscsi_get_portal_groups", 00:09:15.444 "iscsi_delete_target_node", 00:09:15.444 "iscsi_target_node_remove_pg_ig_maps", 00:09:15.444 "iscsi_target_node_add_pg_ig_maps", 00:09:15.444 "iscsi_create_target_node", 00:09:15.444 "iscsi_get_target_nodes", 00:09:15.444 "iscsi_delete_initiator_group", 00:09:15.444 "iscsi_initiator_group_remove_initiators", 00:09:15.444 "iscsi_initiator_group_add_initiators", 00:09:15.444 "iscsi_create_initiator_group", 00:09:15.444 "iscsi_get_initiator_groups", 00:09:15.444 "iaa_scan_accel_module", 00:09:15.444 "dsa_scan_accel_module", 00:09:15.444 "ioat_scan_accel_module", 00:09:15.444 "accel_error_inject_error", 00:09:15.444 "bdev_iscsi_delete", 00:09:15.444 "bdev_iscsi_create", 00:09:15.444 "bdev_iscsi_set_options", 00:09:15.444 "bdev_virtio_attach_controller", 00:09:15.444 "bdev_virtio_scsi_get_devices", 00:09:15.444 "bdev_virtio_detach_controller", 00:09:15.444 "bdev_virtio_blk_set_hotplug", 00:09:15.444 "bdev_ftl_set_property", 00:09:15.444 "bdev_ftl_get_properties", 00:09:15.444 "bdev_ftl_get_stats", 00:09:15.444 "bdev_ftl_unmap", 00:09:15.444 "bdev_ftl_unload", 00:09:15.444 "bdev_ftl_delete", 00:09:15.444 "bdev_ftl_load", 00:09:15.444 "bdev_ftl_create", 00:09:15.444 "bdev_aio_delete", 00:09:15.444 "bdev_aio_rescan", 00:09:15.444 "bdev_aio_create", 
00:09:15.444 "blobfs_create", 00:09:15.444 "blobfs_detect", 00:09:15.444 "blobfs_set_cache_size", 00:09:15.444 "bdev_zone_block_delete", 00:09:15.444 "bdev_zone_block_create", 00:09:15.444 "bdev_delay_delete", 00:09:15.444 "bdev_delay_create", 00:09:15.444 "bdev_delay_update_latency", 00:09:15.444 "bdev_split_delete", 00:09:15.444 "bdev_split_create", 00:09:15.444 "bdev_error_inject_error", 00:09:15.444 "bdev_error_delete", 00:09:15.444 "bdev_error_create", 00:09:15.444 "bdev_raid_set_options", 00:09:15.444 "bdev_raid_remove_base_bdev", 00:09:15.444 "bdev_raid_add_base_bdev", 00:09:15.444 "bdev_raid_delete", 00:09:15.444 "bdev_raid_create", 00:09:15.444 "bdev_raid_get_bdevs", 00:09:15.444 "bdev_lvol_grow_lvstore", 00:09:15.444 "bdev_lvol_get_lvols", 00:09:15.444 "bdev_lvol_get_lvstores", 00:09:15.445 "bdev_lvol_delete", 00:09:15.445 "bdev_lvol_set_read_only", 00:09:15.445 "bdev_lvol_resize", 00:09:15.445 "bdev_lvol_decouple_parent", 00:09:15.445 "bdev_lvol_inflate", 00:09:15.445 "bdev_lvol_rename", 00:09:15.445 "bdev_lvol_clone_bdev", 00:09:15.445 "bdev_lvol_clone", 00:09:15.445 "bdev_lvol_snapshot", 00:09:15.445 "bdev_lvol_create", 00:09:15.445 "bdev_lvol_delete_lvstore", 00:09:15.445 "bdev_lvol_rename_lvstore", 00:09:15.445 "bdev_lvol_create_lvstore", 00:09:15.445 "bdev_passthru_delete", 00:09:15.445 "bdev_passthru_create", 00:09:15.445 "bdev_nvme_cuse_unregister", 00:09:15.445 "bdev_nvme_cuse_register", 00:09:15.445 "bdev_opal_new_user", 00:09:15.445 "bdev_opal_set_lock_state", 00:09:15.445 "bdev_opal_delete", 00:09:15.445 "bdev_opal_get_info", 00:09:15.445 "bdev_opal_create", 00:09:15.445 "bdev_nvme_opal_revert", 00:09:15.445 "bdev_nvme_opal_init", 00:09:15.445 "bdev_nvme_send_cmd", 00:09:15.445 "bdev_nvme_get_path_iostat", 00:09:15.445 "bdev_nvme_get_mdns_discovery_info", 00:09:15.445 "bdev_nvme_stop_mdns_discovery", 00:09:15.445 "bdev_nvme_start_mdns_discovery", 00:09:15.445 "bdev_nvme_set_multipath_policy", 00:09:15.445 "bdev_nvme_set_preferred_path", 00:09:15.445 "bdev_nvme_get_io_paths", 00:09:15.445 "bdev_nvme_remove_error_injection", 00:09:15.445 "bdev_nvme_add_error_injection", 00:09:15.445 "bdev_nvme_get_discovery_info", 00:09:15.445 "bdev_nvme_stop_discovery", 00:09:15.445 "bdev_nvme_start_discovery", 00:09:15.445 "bdev_nvme_get_controller_health_info", 00:09:15.445 "bdev_nvme_disable_controller", 00:09:15.445 "bdev_nvme_enable_controller", 00:09:15.445 "bdev_nvme_reset_controller", 00:09:15.445 "bdev_nvme_get_transport_statistics", 00:09:15.445 "bdev_nvme_apply_firmware", 00:09:15.445 "bdev_nvme_detach_controller", 00:09:15.445 "bdev_nvme_get_controllers", 00:09:15.445 "bdev_nvme_attach_controller", 00:09:15.445 "bdev_nvme_set_hotplug", 00:09:15.445 "bdev_nvme_set_options", 00:09:15.445 "bdev_null_resize", 00:09:15.445 "bdev_null_delete", 00:09:15.445 "bdev_null_create", 00:09:15.445 "bdev_malloc_delete", 00:09:15.445 "bdev_malloc_create" 00:09:15.445 ] 00:09:15.445 22:17:14 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:15.445 22:17:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:15.445 22:17:14 -- common/autotest_common.sh@10 -- # set +x 00:09:15.445 22:17:14 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:15.445 22:17:14 -- spdkcli/tcp.sh@38 -- # killprocess 106028 00:09:15.445 22:17:14 -- common/autotest_common.sh@926 -- # '[' -z 106028 ']' 00:09:15.445 22:17:14 -- common/autotest_common.sh@930 -- # kill -0 106028 00:09:15.445 22:17:14 -- common/autotest_common.sh@931 -- # uname 00:09:15.445 22:17:14 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:15.445 22:17:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106028 00:09:15.445 22:17:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:15.445 killing process with pid 106028 00:09:15.445 22:17:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:15.445 22:17:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106028' 00:09:15.445 22:17:14 -- common/autotest_common.sh@945 -- # kill 106028 00:09:15.445 22:17:14 -- common/autotest_common.sh@950 -- # wait 106028 00:09:17.346 ************************************ 00:09:17.346 END TEST spdkcli_tcp 00:09:17.346 ************************************ 00:09:17.346 00:09:17.346 real 0m3.998s 00:09:17.346 user 0m7.436s 00:09:17.346 sys 0m0.619s 00:09:17.346 22:17:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.346 22:17:16 -- common/autotest_common.sh@10 -- # set +x 00:09:17.346 22:17:16 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:17.346 22:17:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.346 22:17:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.346 22:17:16 -- common/autotest_common.sh@10 -- # set +x 00:09:17.346 ************************************ 00:09:17.346 START TEST dpdk_mem_utility 00:09:17.346 ************************************ 00:09:17.346 22:17:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:17.604 * Looking for test storage... 00:09:17.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:17.604 22:17:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:17.604 22:17:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=106169 00:09:17.604 22:17:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 106169 00:09:17.604 22:17:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.604 22:17:16 -- common/autotest_common.sh@819 -- # '[' -z 106169 ']' 00:09:17.604 22:17:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.604 22:17:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:17.604 22:17:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.604 22:17:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:17.604 22:17:16 -- common/autotest_common.sh@10 -- # set +x 00:09:17.604 [2024-07-14 22:17:16.537754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
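The dpdk_mem_utility test below exercises a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its allocator state to a file, and dpdk_mem_info.py post-processes that dump. Reduced to the bare commands (dump path as reported by the RPC in this run; per the trace that follows, the bare invocation prints the heap/mempool/memzone summary and -m 0 drills into memory id 0):

    # Ask the running target to dump its DPDK memory stats...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # ...which replies {"filename": "/tmp/spdk_mem_dump.txt"}, then summarize it.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # overall summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detail for memory id 0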
00:09:17.604 [2024-07-14 22:17:16.538533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106169 ] 00:09:17.863 [2024-07-14 22:17:16.697182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.863 [2024-07-14 22:17:16.862258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:17.863 [2024-07-14 22:17:16.862487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.239 22:17:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:19.239 22:17:18 -- common/autotest_common.sh@852 -- # return 0 00:09:19.239 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:19.239 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:19.239 22:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.239 22:17:18 -- common/autotest_common.sh@10 -- # set +x 00:09:19.239 { 00:09:19.239 "filename": "/tmp/spdk_mem_dump.txt" 00:09:19.239 } 00:09:19.239 22:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.239 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:19.239 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:19.239 1 heaps totaling size 820.000000 MiB 00:09:19.239 size: 820.000000 MiB heap id: 0 00:09:19.239 end heaps---------- 00:09:19.239 8 mempools totaling size 598.116089 MiB 00:09:19.239 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:19.239 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:19.239 size: 84.521057 MiB name: bdev_io_106169 00:09:19.239 size: 51.011292 MiB name: evtpool_106169 00:09:19.239 size: 50.003479 MiB name: msgpool_106169 00:09:19.239 size: 21.763794 MiB name: PDU_Pool 00:09:19.239 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:19.239 size: 0.026123 MiB name: Session_Pool 00:09:19.239 end mempools------- 00:09:19.239 6 memzones totaling size 4.142822 MiB 00:09:19.239 size: 1.000366 MiB name: RG_ring_0_106169 00:09:19.239 size: 1.000366 MiB name: RG_ring_1_106169 00:09:19.239 size: 1.000366 MiB name: RG_ring_4_106169 00:09:19.239 size: 1.000366 MiB name: RG_ring_5_106169 00:09:19.239 size: 0.125366 MiB name: RG_ring_2_106169 00:09:19.239 size: 0.015991 MiB name: RG_ring_3_106169 00:09:19.239 end memzones------- 00:09:19.239 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:19.239 heap id: 0 total size: 820.000000 MiB number of busy elements: 224 number of free elements: 18 00:09:19.239 list of free elements. 
size: 18.470215 MiB 00:09:19.239 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:19.239 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:19.239 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:19.239 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:19.239 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:19.239 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:19.239 element at address: 0x200019600000 with size: 0.999329 MiB 00:09:19.239 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:19.239 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:19.239 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:19.240 element at address: 0x200019900040 with size: 0.937256 MiB 00:09:19.240 element at address: 0x200000200000 with size: 0.835083 MiB 00:09:19.240 element at address: 0x20001b000000 with size: 0.560730 MiB 00:09:19.240 element at address: 0x200019200000 with size: 0.489197 MiB 00:09:19.240 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:19.240 element at address: 0x200013800000 with size: 0.469116 MiB 00:09:19.240 element at address: 0x200028400000 with size: 0.399719 MiB 00:09:19.240 element at address: 0x200003a00000 with size: 0.356140 MiB 00:09:19.240 list of standard malloc elements. size: 199.265381 MiB 00:09:19.240 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:19.240 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:19.240 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:19.240 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:19.240 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:19.240 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:19.240 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:19.240 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:19.240 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:09:19.240 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:09:19.240 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:19.240 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:09:19.240 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200013878580 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:19.240 element at 
address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:19.240 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08f8c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0921c0 
with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:19.240 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20001b0952c0 with size: 0.000244 MiB 
00:09:19.241 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:19.241 element at address: 0x200028466540 with size: 0.000244 MiB 00:09:19.241 element at address: 0x200028466640 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d300 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d580 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e380 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:19.241 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:19.241 list of memzone associated elements. 
size: 602.264404 MiB 00:09:19.241 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:19.241 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:19.241 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:19.241 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:19.241 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:19.241 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_106169_0 00:09:19.241 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:19.241 associated memzone info: size: 48.002930 MiB name: MP_evtpool_106169_0 00:09:19.241 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:19.241 associated memzone info: size: 48.002930 MiB name: MP_msgpool_106169_0 00:09:19.241 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:19.241 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:19.241 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:19.241 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:19.241 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:19.241 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_106169 00:09:19.241 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:19.241 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_106169 00:09:19.241 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:19.241 associated memzone info: size: 1.007996 MiB name: MP_evtpool_106169 00:09:19.241 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:19.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:19.241 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:19.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:19.241 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:19.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:19.241 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:19.241 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:19.241 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:19.241 associated memzone info: size: 1.000366 MiB name: RG_ring_0_106169 00:09:19.241 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:19.241 associated memzone info: size: 1.000366 MiB name: RG_ring_1_106169 00:09:19.241 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:19.241 associated memzone info: size: 1.000366 MiB name: RG_ring_4_106169 00:09:19.241 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:19.241 associated memzone info: size: 1.000366 MiB name: RG_ring_5_106169 00:09:19.241 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:19.241 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_106169 00:09:19.241 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:19.241 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:19.241 element at address: 0x200013878680 with size: 0.500549 MiB 00:09:19.241 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:19.241 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:19.241 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:19.241 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:19.241 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_106169 00:09:19.241 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:19.241 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:19.241 element at address: 0x200028466740 with size: 0.023804 MiB 00:09:19.241 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:19.241 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:19.241 associated memzone info: size: 0.015991 MiB name: RG_ring_3_106169 00:09:19.241 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:09:19.241 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:19.241 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:09:19.241 associated memzone info: size: 0.000183 MiB name: MP_msgpool_106169 00:09:19.241 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:19.241 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_106169 00:09:19.241 element at address: 0x20002846d400 with size: 0.000366 MiB 00:09:19.241 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:19.241 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:19.241 22:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 106169 00:09:19.241 22:17:18 -- common/autotest_common.sh@926 -- # '[' -z 106169 ']' 00:09:19.241 22:17:18 -- common/autotest_common.sh@930 -- # kill -0 106169 00:09:19.241 22:17:18 -- common/autotest_common.sh@931 -- # uname 00:09:19.241 22:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:19.241 22:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106169 00:09:19.241 killing process with pid 106169 00:09:19.241 22:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:19.241 22:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:19.241 22:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106169' 00:09:19.241 22:17:18 -- common/autotest_common.sh@945 -- # kill 106169 00:09:19.241 22:17:18 -- common/autotest_common.sh@950 -- # wait 106169 00:09:21.146 00:09:21.146 real 0m3.675s 00:09:21.146 user 0m3.773s 00:09:21.146 sys 0m0.554s 00:09:21.146 ************************************ 00:09:21.146 END TEST dpdk_mem_utility 00:09:21.146 ************************************ 00:09:21.146 22:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.146 22:17:20 -- common/autotest_common.sh@10 -- # set +x 00:09:21.146 22:17:20 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:21.146 22:17:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:21.146 22:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.146 22:17:20 -- common/autotest_common.sh@10 -- # set +x 00:09:21.146 ************************************ 00:09:21.146 START TEST event 00:09:21.146 ************************************ 00:09:21.146 22:17:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:21.146 * Looking for test storage... 
00:09:21.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:21.146 22:17:20 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:21.146 22:17:20 -- bdev/nbd_common.sh@6 -- # set -e 00:09:21.146 22:17:20 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:21.146 22:17:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:21.146 22:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.146 22:17:20 -- common/autotest_common.sh@10 -- # set +x 00:09:21.146 ************************************ 00:09:21.146 START TEST event_perf 00:09:21.146 ************************************ 00:09:21.146 22:17:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:21.404 Running I/O for 1 seconds...[2024-07-14 22:17:20.250010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:21.404 [2024-07-14 22:17:20.250224] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106271 ] 00:09:21.404 [2024-07-14 22:17:20.439030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.662 Running I/O for 1 seconds...[2024-07-14 22:17:20.623652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.662 [2024-07-14 22:17:20.623880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.662 [2024-07-14 22:17:20.623827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.662 [2024-07-14 22:17:20.623886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.035 00:09:23.035 lcore 0: 111174 00:09:23.035 lcore 1: 111174 00:09:23.035 lcore 2: 111174 00:09:23.035 lcore 3: 111174 00:09:23.035 done. 00:09:23.035 ************************************ 00:09:23.035 END TEST event_perf 00:09:23.035 ************************************ 00:09:23.035 00:09:23.035 real 0m1.777s 00:09:23.035 user 0m4.530s 00:09:23.035 sys 0m0.132s 00:09:23.035 22:17:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.035 22:17:21 -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 22:17:22 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:23.035 22:17:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:23.035 22:17:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:23.035 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 ************************************ 00:09:23.035 START TEST event_reactor 00:09:23.035 ************************************ 00:09:23.035 22:17:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:23.035 [2024-07-14 22:17:22.077488] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
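The event_perf figures above (four reactors, ~111k events each over the 1-second run) come from posting no-op events as fast as each lcore can service them. A loosely analogous single-core sketch, purely illustrative and not the SPDK event machinery:

import time

def count_events(duration_s=1.0):
    # Dispatch no-op callbacks for a fixed interval and count completions,
    # roughly what event_perf reports per lcore.
    done = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        (lambda: None)()  # stand-in for an event callback
        done += 1
    return done

print("events completed in 1s:", count_events())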
00:09:23.035 [2024-07-14 22:17:22.078438] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106324 ] 00:09:23.293 [2024-07-14 22:17:22.246551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.550 [2024-07-14 22:17:22.425598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.926 test_start 00:09:24.926 oneshot 00:09:24.926 tick 100 00:09:24.926 tick 100 00:09:24.926 tick 250 00:09:24.926 tick 100 00:09:24.926 tick 100 00:09:24.926 tick 100 00:09:24.926 tick 250 00:09:24.926 tick 500 00:09:24.926 tick 100 00:09:24.926 tick 100 00:09:24.926 tick 250 00:09:24.926 tick 100 00:09:24.926 tick 100 00:09:24.926 test_end 00:09:24.926 00:09:24.926 real 0m1.726s 00:09:24.926 user 0m1.496s 00:09:24.926 sys 0m0.129s 00:09:24.926 22:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.926 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 ************************************ 00:09:24.926 END TEST event_reactor 00:09:24.926 ************************************ 00:09:24.926 22:17:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:24.926 22:17:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:24.926 22:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.926 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 ************************************ 00:09:24.926 START TEST event_reactor_perf 00:09:24.926 ************************************ 00:09:24.926 22:17:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:24.926 [2024-07-14 22:17:23.851887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
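The reactor test above registers one-shot and periodic timers and prints one line per expiry; the tick 100/250/500 values are the timer periods. A rough analogy with asyncio tasks standing in for SPDK pollers (periods and counts chosen to echo the output, nothing here is the real reactor API):

import asyncio

async def poller(name, period_us, fires):
    # Fire a fixed number of times at the given period, like the test's timers.
    for _ in range(fires):
        await asyncio.sleep(period_us / 1e6)
        print(f"tick {period_us} ({name})")

async def main():
    await asyncio.gather(
        poller("a", 100, 5),
        poller("b", 250, 3),
        poller("c", 500, 1),
    )

asyncio.run(main())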
00:09:24.926 [2024-07-14 22:17:23.852248] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106391 ] 00:09:25.185 [2024-07-14 22:17:24.018099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.185 [2024-07-14 22:17:24.193193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.560 test_start 00:09:26.560 test_end 00:09:26.560 Performance: 400503 events per second 00:09:26.560 ************************************ 00:09:26.560 END TEST event_reactor_perf 00:09:26.560 ************************************ 00:09:26.560 00:09:26.560 real 0m1.710s 00:09:26.560 user 0m1.474s 00:09:26.560 sys 0m0.134s 00:09:26.560 22:17:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.560 22:17:25 -- common/autotest_common.sh@10 -- # set +x 00:09:26.560 22:17:25 -- event/event.sh@49 -- # uname -s 00:09:26.560 22:17:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:26.560 22:17:25 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:26.560 22:17:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:26.560 22:17:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.560 22:17:25 -- common/autotest_common.sh@10 -- # set +x 00:09:26.560 ************************************ 00:09:26.560 START TEST event_scheduler 00:09:26.560 ************************************ 00:09:26.560 22:17:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:26.816 * Looking for test storage... 00:09:26.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:26.816 22:17:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:26.816 22:17:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=106462 00:09:26.816 22:17:25 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:26.816 22:17:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:26.816 22:17:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 106462 00:09:26.816 22:17:25 -- common/autotest_common.sh@819 -- # '[' -z 106462 ']' 00:09:26.816 22:17:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.817 22:17:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.817 22:17:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.817 22:17:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.817 22:17:25 -- common/autotest_common.sh@10 -- # set +x 00:09:26.817 [2024-07-14 22:17:25.745279] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
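The scheduler app above is launched with -m 0xF -p 0x2 --wait-for-rpc: -m is a reactor cpumask where each set bit selects one lcore, while -p gives the main lcore number itself (0x2 here, which surfaces as --main-lcore=2 in the EAL parameter dump that follows). A small helper for decoding such masks:

def cores_from_mask(mask):
    # Each set bit selects one lcore; lowest bit is core 0.
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_from_mask(0xF))  # [0, 1, 2, 3] -> the -m reactor mask
# -p 0x2 is parsed as the main lcore number (2), not a mask.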
00:09:26.817 [2024-07-14 22:17:25.745772] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106462 ] 00:09:27.074 [2024-07-14 22:17:25.949040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.332 [2024-07-14 22:17:26.199685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.332 [2024-07-14 22:17:26.199827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.332 [2024-07-14 22:17:26.199977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.332 [2024-07-14 22:17:26.199985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.897 22:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.897 22:17:26 -- common/autotest_common.sh@852 -- # return 0 00:09:27.897 22:17:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:27.897 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.897 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 POWER: Env isn't set yet! 00:09:27.897 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:27.897 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.897 POWER: Cannot set governor of lcore 0 to userspace 00:09:27.897 POWER: Attempting to initialise PSTAT power management... 00:09:27.897 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.897 POWER: Cannot set governor of lcore 0 to performance 00:09:27.897 POWER: Attempting to initialise AMD PSTATE power management... 00:09:27.897 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.897 POWER: Cannot set governor of lcore 0 to userspace 00:09:27.897 POWER: Attempting to initialise CPPC power management... 00:09:27.897 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.897 POWER: Cannot set governor of lcore 0 to userspace 00:09:27.897 POWER: Attempting to initialise VM power management... 
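The string of POWER: failures above is DPDK probing each cpufreq driver in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) by opening the sysfs governor node named in the messages; on this VM none of those nodes exist. The same probe, sketched:

import pathlib

def governor(cpu):
    # Read the node DPDK tries to open; None means no cpufreq driver,
    # which is the situation on this VM.
    path = pathlib.Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    try:
        return path.read_text().strip()
    except OSError:
        return None

print(governor(0))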
00:09:27.897 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:27.897 POWER: Unable to set Power Management Environment for lcore 0 00:09:27.897 [2024-07-14 22:17:26.677558] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:27.897 [2024-07-14 22:17:26.677823] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:27.897 [2024-07-14 22:17:26.677965] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:27.897 [2024-07-14 22:17:26.678144] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:27.897 [2024-07-14 22:17:26.678286] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:27.897 [2024-07-14 22:17:26.678443] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:27.897 22:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.897 22:17:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:27.897 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.897 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 [2024-07-14 22:17:26.956140] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:27.897 22:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.897 22:17:26 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:27.897 22:17:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.897 22:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.897 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 ************************************ 00:09:28.155 START TEST scheduler_create_thread 00:09:28.155 ************************************ 00:09:28.155 22:17:26 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:28.155 22:17:26 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:28.155 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 2 00:09:28.155 22:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:26 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:28.155 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 3 00:09:28.155 22:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:26 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:28.155 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 4 00:09:28.155 22:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:26 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:28.155 22:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 5 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 6 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 7 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 8 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 9 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 10 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.155 22:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:28.155 22:17:27 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:28.155 22:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.155 22:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:29.089 22:17:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:29.089 00:09:29.089 real 0m1.177s 00:09:29.089 user 0m0.009s 00:09:29.089 sys 0m0.002s 00:09:29.089 ************************************ 00:09:29.089 END TEST scheduler_create_thread 00:09:29.089 ************************************ 00:09:29.089 22:17:28 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.089 22:17:28 -- common/autotest_common.sh@10 -- # set +x 00:09:29.347 22:17:28 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:29.347 22:17:28 -- scheduler/scheduler.sh@46 -- # killprocess 106462 00:09:29.347 22:17:28 -- common/autotest_common.sh@926 -- # '[' -z 106462 ']' 00:09:29.347 22:17:28 -- common/autotest_common.sh@930 -- # kill -0 106462 00:09:29.347 22:17:28 -- common/autotest_common.sh@931 -- # uname 00:09:29.347 22:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:29.347 22:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106462 00:09:29.347 22:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:29.347 killing process with pid 106462 00:09:29.347 22:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:29.347 22:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106462' 00:09:29.347 22:17:28 -- common/autotest_common.sh@945 -- # kill 106462 00:09:29.347 22:17:28 -- common/autotest_common.sh@950 -- # wait 106462 00:09:29.606 [2024-07-14 22:17:28.627125] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:30.981 ************************************ 00:09:30.981 END TEST event_scheduler 00:09:30.981 ************************************ 00:09:30.981 00:09:30.981 real 0m4.132s 00:09:30.981 user 0m6.323s 00:09:30.981 sys 0m0.467s 00:09:30.981 22:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.981 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:09:30.981 22:17:29 -- event/event.sh@51 -- # modprobe -n nbd 00:09:30.981 22:17:29 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:30.981 22:17:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.982 22:17:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.982 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:09:30.982 ************************************ 00:09:30.982 START TEST app_repeat 00:09:30.982 ************************************ 00:09:30.982 22:17:29 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:30.982 22:17:29 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.982 22:17:29 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:09:30.982 22:17:29 -- event/event.sh@13 -- # local nbd_list 00:09:30.982 22:17:29 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:09:30.982 22:17:29 -- event/event.sh@14 -- # local bdev_list 00:09:30.982 22:17:29 -- event/event.sh@15 -- # local repeat_times=4 00:09:30.982 22:17:29 -- event/event.sh@17 -- # modprobe nbd 00:09:30.982 22:17:29 -- event/event.sh@19 -- # repeat_pid=106573 00:09:30.982 22:17:29 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:30.982 22:17:29 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:30.982 Process app_repeat pid: 106573 00:09:30.982 22:17:29 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106573' 00:09:30.982 22:17:29 -- event/event.sh@23 -- # for i in {0..2} 00:09:30.982 spdk_app_start Round 0 00:09:30.982 22:17:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:30.982 22:17:29 -- event/event.sh@25 -- # waitforlisten 106573 /var/tmp/spdk-nbd.sock 00:09:30.982 22:17:29 -- common/autotest_common.sh@819 -- # '[' -z 106573 ']' 00:09:30.982 22:17:29 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:30.982 22:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.982 22:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:30.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:30.982 22:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.982 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:09:30.982 [2024-07-14 22:17:29.840124] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:30.982 [2024-07-14 22:17:29.841358] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106573 ] 00:09:30.982 [2024-07-14 22:17:30.014964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.254 [2024-07-14 22:17:30.210138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.254 [2024-07-14 22:17:30.210145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.847 22:17:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.847 22:17:30 -- common/autotest_common.sh@852 -- # return 0 00:09:31.847 22:17:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.105 Malloc0 00:09:32.105 22:17:31 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.364 Malloc1 00:09:32.364 22:17:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@12 -- # local i 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.364 22:17:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:32.623 /dev/nbd0 00:09:32.623 22:17:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:32.623 22:17:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:32.623 22:17:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:32.623 22:17:31 -- common/autotest_common.sh@857 -- # local i 00:09:32.623 22:17:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:32.623 22:17:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:32.623 
22:17:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:32.623 22:17:31 -- common/autotest_common.sh@861 -- # break 00:09:32.623 22:17:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:32.623 22:17:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:32.623 22:17:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:32.623 1+0 records in 00:09:32.623 1+0 records out 00:09:32.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446833 s, 9.2 MB/s 00:09:32.623 22:17:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.623 22:17:31 -- common/autotest_common.sh@874 -- # size=4096 00:09:32.623 22:17:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.623 22:17:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:32.623 22:17:31 -- common/autotest_common.sh@877 -- # return 0 00:09:32.623 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.623 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.623 22:17:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:32.882 /dev/nbd1 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:32.882 22:17:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:32.882 22:17:31 -- common/autotest_common.sh@857 -- # local i 00:09:32.882 22:17:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:32.882 22:17:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:32.882 22:17:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:32.882 22:17:31 -- common/autotest_common.sh@861 -- # break 00:09:32.882 22:17:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:32.882 22:17:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:32.882 22:17:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:32.882 1+0 records in 00:09:32.882 1+0 records out 00:09:32.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270488 s, 15.1 MB/s 00:09:32.882 22:17:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.882 22:17:31 -- common/autotest_common.sh@874 -- # size=4096 00:09:32.882 22:17:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.882 22:17:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:32.882 22:17:31 -- common/autotest_common.sh@877 -- # return 0 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.882 22:17:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.140 { 00:09:33.140 "nbd_device": "/dev/nbd0", 00:09:33.140 "bdev_name": "Malloc0" 00:09:33.140 }, 00:09:33.140 { 00:09:33.140 "nbd_device": "/dev/nbd1", 00:09:33.140 "bdev_name": "Malloc1" 00:09:33.140 } 00:09:33.140 ]' 
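The waitfornbd helper above polls /proc/partitions (up to 20 tries) for the new device before dd does a single 4 KiB direct read as a smoke test. The same readiness check, sketched in Python with a plain buffered read standing in for iflag=direct:

import time

def wait_for_nbd(name, retries=20, delay=0.1):
    # Mirror the harness loop: grep -q -w <name> /proc/partitions, up to 20 tries.
    for _ in range(retries):
        with open("/proc/partitions") as f:
            if any(line.split()[-1:] == [name] for line in f):
                return True
        time.sleep(delay)
    return False

if wait_for_nbd("nbd0"):
    # Harness equivalent: dd if=/dev/nbd0 bs=4096 count=1 iflag=direct
    with open("/dev/nbd0", "rb") as dev:
        print(len(dev.read(4096)), "bytes read from /dev/nbd0")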
00:09:33.140 22:17:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.140 { 00:09:33.140 "nbd_device": "/dev/nbd0", 00:09:33.140 "bdev_name": "Malloc0" 00:09:33.140 }, 00:09:33.140 { 00:09:33.140 "nbd_device": "/dev/nbd1", 00:09:33.140 "bdev_name": "Malloc1" 00:09:33.140 } 00:09:33.140 ]' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.140 /dev/nbd1' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.140 /dev/nbd1' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@65 -- # count=2 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@95 -- # count=2 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:33.140 256+0 records in 00:09:33.140 256+0 records out 00:09:33.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00774488 s, 135 MB/s 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.140 22:17:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.398 256+0 records in 00:09:33.398 256+0 records out 00:09:33.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239716 s, 43.7 MB/s 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.398 256+0 records in 00:09:33.398 256+0 records out 00:09:33.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288539 s, 36.3 MB/s 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.398 22:17:32 -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@51 -- # local i 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.399 22:17:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.656 22:17:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.656 22:17:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.656 22:17:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@41 -- # break 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.657 22:17:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@41 -- # break 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.915 22:17:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@65 -- # true 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@104 -- # count=0 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:34.173 22:17:33 -- bdev/nbd_common.sh@109 -- # return 0 00:09:34.173 22:17:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.737 22:17:33 -- event/event.sh@35 -- # sleep 3 00:09:35.670 [2024-07-14 22:17:34.656147] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:09:35.927 [2024-07-14 22:17:34.820598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.927 [2024-07-14 22:17:34.820603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.184 [2024-07-14 22:17:34.995543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:36.184 [2024-07-14 22:17:34.995721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.553 spdk_app_start Round 1 00:09:37.553 22:17:36 -- event/event.sh@23 -- # for i in {0..2} 00:09:37.553 22:17:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:37.553 22:17:36 -- event/event.sh@25 -- # waitforlisten 106573 /var/tmp/spdk-nbd.sock 00:09:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:37.553 22:17:36 -- common/autotest_common.sh@819 -- # '[' -z 106573 ']' 00:09:37.553 22:17:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.553 22:17:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.553 22:17:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.553 22:17:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.553 22:17:36 -- common/autotest_common.sh@10 -- # set +x 00:09:38.117 22:17:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.117 22:17:36 -- common/autotest_common.sh@852 -- # return 0 00:09:38.117 22:17:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.117 Malloc0 00:09:38.373 22:17:37 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.629 Malloc1 00:09:38.629 22:17:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.629 22:17:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@12 -- # local i 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.630 22:17:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:38.886 /dev/nbd0 00:09:38.886 22:17:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:38.886 22:17:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:38.886 22:17:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:38.886 22:17:37 -- common/autotest_common.sh@857 -- # 
local i 00:09:38.886 22:17:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:38.886 22:17:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:38.886 22:17:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:38.886 22:17:37 -- common/autotest_common.sh@861 -- # break 00:09:38.886 22:17:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:38.886 22:17:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:38.886 22:17:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:38.886 1+0 records in 00:09:38.886 1+0 records out 00:09:38.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271632 s, 15.1 MB/s 00:09:38.886 22:17:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.886 22:17:37 -- common/autotest_common.sh@874 -- # size=4096 00:09:38.886 22:17:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.886 22:17:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:38.886 22:17:37 -- common/autotest_common.sh@877 -- # return 0 00:09:38.886 22:17:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.886 22:17:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.886 22:17:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:39.143 /dev/nbd1 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:39.143 22:17:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:39.143 22:17:38 -- common/autotest_common.sh@857 -- # local i 00:09:39.143 22:17:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:39.143 22:17:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:39.143 22:17:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:39.143 22:17:38 -- common/autotest_common.sh@861 -- # break 00:09:39.143 22:17:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:39.143 22:17:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:39.143 22:17:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:39.143 1+0 records in 00:09:39.143 1+0 records out 00:09:39.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201295 s, 20.3 MB/s 00:09:39.143 22:17:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.143 22:17:38 -- common/autotest_common.sh@874 -- # size=4096 00:09:39.143 22:17:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.143 22:17:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:39.143 22:17:38 -- common/autotest_common.sh@877 -- # return 0 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.143 22:17:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:39.401 { 00:09:39.401 "nbd_device": "/dev/nbd0", 
00:09:39.401 "bdev_name": "Malloc0" 00:09:39.401 }, 00:09:39.401 { 00:09:39.401 "nbd_device": "/dev/nbd1", 00:09:39.401 "bdev_name": "Malloc1" 00:09:39.401 } 00:09:39.401 ]' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:39.401 { 00:09:39.401 "nbd_device": "/dev/nbd0", 00:09:39.401 "bdev_name": "Malloc0" 00:09:39.401 }, 00:09:39.401 { 00:09:39.401 "nbd_device": "/dev/nbd1", 00:09:39.401 "bdev_name": "Malloc1" 00:09:39.401 } 00:09:39.401 ]' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:39.401 /dev/nbd1' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:39.401 /dev/nbd1' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@65 -- # count=2 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@95 -- # count=2 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:39.401 256+0 records in 00:09:39.401 256+0 records out 00:09:39.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603082 s, 174 MB/s 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:39.401 256+0 records in 00:09:39.401 256+0 records out 00:09:39.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245741 s, 42.7 MB/s 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.401 22:17:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:39.657 256+0 records in 00:09:39.657 256+0 records out 00:09:39.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270727 s, 38.7 MB/s 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@51 -- # local i 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.657 22:17:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@41 -- # break 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:39.915 22:17:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@41 -- # break 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.172 22:17:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@65 -- # true 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@65 -- # count=0 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@104 -- # count=0 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:40.429 22:17:39 -- bdev/nbd_common.sh@109 -- # return 0 00:09:40.429 22:17:39 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
spdk_kill_instance SIGTERM 00:09:40.994 22:17:39 -- event/event.sh@35 -- # sleep 3 00:09:42.060 [2024-07-14 22:17:40.801284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.060 [2024-07-14 22:17:40.952161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.060 [2024-07-14 22:17:40.952173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.060 [2024-07-14 22:17:41.120197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.060 [2024-07-14 22:17:41.120288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:43.961 22:17:42 -- event/event.sh@23 -- # for i in {0..2} 00:09:43.962 22:17:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:43.962 spdk_app_start Round 2 00:09:43.962 22:17:42 -- event/event.sh@25 -- # waitforlisten 106573 /var/tmp/spdk-nbd.sock 00:09:43.962 22:17:42 -- common/autotest_common.sh@819 -- # '[' -z 106573 ']' 00:09:43.962 22:17:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.962 22:17:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:43.962 22:17:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:43.962 22:17:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:43.962 22:17:42 -- common/autotest_common.sh@10 -- # set +x 00:09:43.962 22:17:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:43.962 22:17:43 -- common/autotest_common.sh@852 -- # return 0 00:09:43.962 22:17:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:44.220 Malloc0 00:09:44.220 22:17:43 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:44.786 Malloc1 00:09:44.786 22:17:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@12 -- # local i 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:44.786 /dev/nbd0 00:09:44.786 22:17:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:44.786 22:17:43 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:44.786 22:17:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:44.786 22:17:43 -- common/autotest_common.sh@857 -- # local i 00:09:44.786 22:17:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:44.786 22:17:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:44.786 22:17:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:44.786 22:17:43 -- common/autotest_common.sh@861 -- # break 00:09:44.786 22:17:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:44.786 22:17:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:44.787 22:17:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:44.787 1+0 records in 00:09:44.787 1+0 records out 00:09:44.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038258 s, 10.7 MB/s 00:09:44.787 22:17:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.787 22:17:43 -- common/autotest_common.sh@874 -- # size=4096 00:09:44.787 22:17:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.787 22:17:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:44.787 22:17:43 -- common/autotest_common.sh@877 -- # return 0 00:09:44.787 22:17:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.787 22:17:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.787 22:17:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:45.045 /dev/nbd1 00:09:45.045 22:17:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:45.045 22:17:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:45.046 22:17:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:45.046 22:17:44 -- common/autotest_common.sh@857 -- # local i 00:09:45.046 22:17:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:45.046 22:17:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:45.046 22:17:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:45.046 22:17:44 -- common/autotest_common.sh@861 -- # break 00:09:45.046 22:17:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:45.046 22:17:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:45.046 22:17:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.046 1+0 records in 00:09:45.046 1+0 records out 00:09:45.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498698 s, 8.2 MB/s 00:09:45.046 22:17:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.046 22:17:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:45.046 22:17:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.046 22:17:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:45.046 22:17:44 -- common/autotest_common.sh@877 -- # return 0 00:09:45.046 22:17:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.046 22:17:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.046 22:17:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.046 22:17:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.046 22:17:44 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.304 22:17:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:45.304 { 00:09:45.304 "nbd_device": "/dev/nbd0", 00:09:45.304 "bdev_name": "Malloc0" 00:09:45.304 }, 00:09:45.304 { 00:09:45.304 "nbd_device": "/dev/nbd1", 00:09:45.304 "bdev_name": "Malloc1" 00:09:45.304 } 00:09:45.304 ]' 00:09:45.304 22:17:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:45.304 { 00:09:45.304 "nbd_device": "/dev/nbd0", 00:09:45.304 "bdev_name": "Malloc0" 00:09:45.304 }, 00:09:45.304 { 00:09:45.304 "nbd_device": "/dev/nbd1", 00:09:45.304 "bdev_name": "Malloc1" 00:09:45.304 } 00:09:45.304 ]' 00:09:45.304 22:17:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:45.563 /dev/nbd1' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:45.563 /dev/nbd1' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@65 -- # count=2 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@95 -- # count=2 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:45.563 256+0 records in 00:09:45.563 256+0 records out 00:09:45.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00941373 s, 111 MB/s 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:45.563 256+0 records in 00:09:45.563 256+0 records out 00:09:45.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233967 s, 44.8 MB/s 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:45.563 256+0 records in 00:09:45.563 256+0 records out 00:09:45.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338998 s, 30.9 MB/s 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd0 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@51 -- # local i 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.563 22:17:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@41 -- # break 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.821 22:17:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.079 22:17:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@41 -- # break 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.338 22:17:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@65 -- # true 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@104 -- # count=0 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@105 
-- # '[' 0 -ne 0 ']' 00:09:46.597 22:17:45 -- bdev/nbd_common.sh@109 -- # return 0 00:09:46.597 22:17:45 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:46.855 22:17:45 -- event/event.sh@35 -- # sleep 3 00:09:48.227 [2024-07-14 22:17:46.900278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.227 [2024-07-14 22:17:47.073709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.227 [2024-07-14 22:17:47.073714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.227 [2024-07-14 22:17:47.234741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:48.227 [2024-07-14 22:17:47.234888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.128 22:17:48 -- event/event.sh@38 -- # waitforlisten 106573 /var/tmp/spdk-nbd.sock 00:09:50.128 22:17:48 -- common/autotest_common.sh@819 -- # '[' -z 106573 ']' 00:09:50.128 22:17:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.128 22:17:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:50.128 22:17:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.128 22:17:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:50.128 22:17:48 -- common/autotest_common.sh@10 -- # set +x 00:09:50.128 22:17:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.128 22:17:49 -- common/autotest_common.sh@852 -- # return 0 00:09:50.128 22:17:49 -- event/event.sh@39 -- # killprocess 106573 00:09:50.128 22:17:49 -- common/autotest_common.sh@926 -- # '[' -z 106573 ']' 00:09:50.128 22:17:49 -- common/autotest_common.sh@930 -- # kill -0 106573 00:09:50.128 22:17:49 -- common/autotest_common.sh@931 -- # uname 00:09:50.128 22:17:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:50.128 22:17:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106573 00:09:50.128 killing process with pid 106573 00:09:50.128 22:17:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:50.128 22:17:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:50.128 22:17:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106573' 00:09:50.128 22:17:49 -- common/autotest_common.sh@945 -- # kill 106573 00:09:50.128 22:17:49 -- common/autotest_common.sh@950 -- # wait 106573 00:09:51.090 spdk_app_start is called in Round 0. 00:09:51.090 Shutdown signal received, stop current app iteration 00:09:51.090 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:51.090 spdk_app_start is called in Round 1. 00:09:51.090 Shutdown signal received, stop current app iteration 00:09:51.090 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:51.090 spdk_app_start is called in Round 2. 00:09:51.090 Shutdown signal received, stop current app iteration 00:09:51.090 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:51.090 spdk_app_start is called in Round 3. 
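
The app_repeat test (pid 106573) has now run the same cycle three times, one per "spdk_app_start Round" echo: wait for the RPC socket, create two 64 MiB malloc bdevs with 4 KiB blocks, expose them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device and compare it back, detach, then ask the app to restart with spdk_kill_instance SIGTERM and sleep 3. A condensed sketch of the loop the trace walks through (rpc abbreviates the traced scripts/rpc.py invocation; the waitfornbd readiness checks are omitted):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # app_pid is 106573 in this run
        rpc bdev_malloc_create 64 4096                    # -> Malloc0
        rpc bdev_malloc_create 64 4096                    # -> Malloc1
        rpc nbd_start_disk Malloc0 /dev/nbd0
        rpc nbd_start_disk Malloc1 /dev/nbd1
        dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of test data
        for nbd in /dev/nbd0 /dev/nbd1; do
            dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
            cmp -b -n 1M nbdrandtest "$nbd"               # read back and verify
        done
        rm nbdrandtest
        rpc nbd_stop_disk /dev/nbd0
        rpc nbd_stop_disk /dev/nbd1
        rpc spdk_kill_instance SIGTERM                    # restart for the next round
        sleep 3
    done
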
00:09:51.090 Shutdown signal received, stop current app iteration 00:09:51.090 ************************************ 00:09:51.090 END TEST app_repeat 00:09:51.090 ************************************ 00:09:51.090 22:17:50 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:51.090 22:17:50 -- event/event.sh@42 -- # return 0 00:09:51.090 00:09:51.090 real 0m20.274s 00:09:51.090 user 0m43.667s 00:09:51.090 sys 0m2.789s 00:09:51.090 22:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.090 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:09:51.090 22:17:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:51.090 22:17:50 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:51.090 22:17:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:51.090 22:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.090 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:09:51.090 ************************************ 00:09:51.090 START TEST cpu_locks 00:09:51.090 ************************************ 00:09:51.090 22:17:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:51.348 * Looking for test storage... 00:09:51.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:51.348 22:17:50 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:51.348 22:17:50 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:51.348 22:17:50 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:51.348 22:17:50 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:51.348 22:17:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:51.348 22:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.348 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:09:51.348 ************************************ 00:09:51.348 START TEST default_locks 00:09:51.348 ************************************ 00:09:51.348 22:17:50 -- common/autotest_common.sh@1104 -- # default_locks 00:09:51.348 22:17:50 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107146 00:09:51.348 22:17:50 -- event/cpu_locks.sh@47 -- # waitforlisten 107146 00:09:51.348 22:17:50 -- common/autotest_common.sh@819 -- # '[' -z 107146 ']' 00:09:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.348 22:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.348 22:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.348 22:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.348 22:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.348 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:09:51.348 22:17:50 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.348 [2024-07-14 22:17:50.272456] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
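
The cpu_locks suite starting here exercises SPDK's per-core lock files: a target launched with a core mask (here -m 0x1, so core 0) takes a file lock for each claimed core, and a second target asking for a locked core should refuse to start. The suite's locks_exist check, visible in the trace that follows, confirms the lock with lslocks; reconstructed from the traced commands:

    locks_exist() {
        # spdk_tgt holds a /var/tmp/spdk_cpu_lock* file lock for every core
        # in its mask; lslocks prints the locks held by the given pid.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

default_locks asserts that this succeeds while pid 107146 is alive, and that the /var/tmp/spdk_cpu_lock* files are released once the target is killed.
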
00:09:51.348 [2024-07-14 22:17:50.272935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107146 ] 00:09:51.608 [2024-07-14 22:17:50.430534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.608 [2024-07-14 22:17:50.606124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.608 [2024-07-14 22:17:50.606388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.985 22:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.985 22:17:51 -- common/autotest_common.sh@852 -- # return 0 00:09:52.985 22:17:51 -- event/cpu_locks.sh@49 -- # locks_exist 107146 00:09:52.985 22:17:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:52.985 22:17:51 -- event/cpu_locks.sh@22 -- # lslocks -p 107146 00:09:53.243 22:17:52 -- event/cpu_locks.sh@50 -- # killprocess 107146 00:09:53.243 22:17:52 -- common/autotest_common.sh@926 -- # '[' -z 107146 ']' 00:09:53.243 22:17:52 -- common/autotest_common.sh@930 -- # kill -0 107146 00:09:53.243 22:17:52 -- common/autotest_common.sh@931 -- # uname 00:09:53.243 22:17:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:53.243 22:17:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107146 00:09:53.243 22:17:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:53.243 killing process with pid 107146 00:09:53.243 22:17:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:53.243 22:17:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107146' 00:09:53.243 22:17:52 -- common/autotest_common.sh@945 -- # kill 107146 00:09:53.244 22:17:52 -- common/autotest_common.sh@950 -- # wait 107146 00:09:55.146 22:17:54 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107146 00:09:55.146 22:17:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:55.147 22:17:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107146 00:09:55.147 22:17:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:55.147 22:17:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.147 22:17:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:55.147 22:17:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.147 22:17:54 -- common/autotest_common.sh@643 -- # waitforlisten 107146 00:09:55.147 22:17:54 -- common/autotest_common.sh@819 -- # '[' -z 107146 ']' 00:09:55.147 22:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.147 22:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.147 22:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
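
Teardown goes through killprocess, whose steps are spelled out in the trace above: check that the pid is non-empty and still alive with kill -0, look up the command name with ps so a sudo wrapper is never signalled by mistake, then kill and reap. A simplified reconstruction (the real helper in autotest_common.sh handles the sudo case specially rather than just refusing, as assumed here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # refuse to kill the wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap and propagate exit status
    }
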
00:09:55.147 22:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:55.147 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.147 ERROR: process (pid: 107146) is no longer running 00:09:55.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107146) - No such process 00:09:55.147 22:17:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:55.147 22:17:54 -- common/autotest_common.sh@852 -- # return 1 00:09:55.147 22:17:54 -- common/autotest_common.sh@643 -- # es=1 00:09:55.147 22:17:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:55.147 22:17:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:55.147 22:17:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:55.147 22:17:54 -- event/cpu_locks.sh@54 -- # no_locks 00:09:55.147 22:17:54 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:55.147 22:17:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:55.147 22:17:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:55.147 00:09:55.147 real 0m3.862s 00:09:55.147 user 0m4.004s 00:09:55.147 sys 0m0.614s 00:09:55.147 ************************************ 00:09:55.147 END TEST default_locks 00:09:55.147 22:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.147 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.147 ************************************ 00:09:55.147 22:17:54 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:55.147 22:17:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:55.147 22:17:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.147 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.147 ************************************ 00:09:55.147 START TEST default_locks_via_rpc 00:09:55.147 ************************************ 00:09:55.147 22:17:54 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:55.147 22:17:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107252 00:09:55.147 22:17:54 -- event/cpu_locks.sh@63 -- # waitforlisten 107252 00:09:55.147 22:17:54 -- common/autotest_common.sh@819 -- # '[' -z 107252 ']' 00:09:55.147 22:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.147 22:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.147 22:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.147 22:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:55.147 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.147 22:17:54 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:55.147 [2024-07-14 22:17:54.192982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
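
The failure-path assertion above (NOT waitforlisten 107146, run after the target was already killed) relies on a NOT wrapper that inverts the exit status: it passes only when the wrapped command fails, while an exit status above 128 (death by signal) still counts as a real error. A condensed sketch of the traced logic:

    NOT() {
        local es=0
        "$@" || es=$?                    # run the command, capture its status
        (( es > 128 )) && return "$es"   # killed by a signal: genuine failure
        (( es != 0 ))                    # succeed only if the command failed
    }

The default_locks_via_rpc test that begins next covers the same locks through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs instead of command-line flags.
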
00:09:55.147 [2024-07-14 22:17:54.193328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107252 ] 00:09:55.405 [2024-07-14 22:17:54.359495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.663 [2024-07-14 22:17:54.515755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:55.663 [2024-07-14 22:17:54.516025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.041 22:17:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.041 22:17:55 -- common/autotest_common.sh@852 -- # return 0 00:09:57.041 22:17:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:57.041 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:57.041 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:09:57.041 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:57.041 22:17:55 -- event/cpu_locks.sh@67 -- # no_locks 00:09:57.041 22:17:55 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:57.041 22:17:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:57.041 22:17:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:57.041 22:17:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:57.041 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:57.041 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:09:57.041 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:57.041 22:17:55 -- event/cpu_locks.sh@71 -- # locks_exist 107252 00:09:57.041 22:17:55 -- event/cpu_locks.sh@22 -- # lslocks -p 107252 00:09:57.041 22:17:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.041 22:17:55 -- event/cpu_locks.sh@73 -- # killprocess 107252 00:09:57.041 22:17:55 -- common/autotest_common.sh@926 -- # '[' -z 107252 ']' 00:09:57.041 22:17:55 -- common/autotest_common.sh@930 -- # kill -0 107252 00:09:57.041 22:17:55 -- common/autotest_common.sh@931 -- # uname 00:09:57.041 22:17:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:57.041 22:17:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107252 00:09:57.041 22:17:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:57.041 killing process with pid 107252 00:09:57.041 22:17:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:57.041 22:17:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107252' 00:09:57.041 22:17:55 -- common/autotest_common.sh@945 -- # kill 107252 00:09:57.041 22:17:55 -- common/autotest_common.sh@950 -- # wait 107252 00:09:58.945 00:09:58.945 real 0m3.616s 00:09:58.945 user 0m3.729s 00:09:58.945 sys 0m0.599s 00:09:58.945 22:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.945 ************************************ 00:09:58.945 END TEST default_locks_via_rpc 00:09:58.945 ************************************ 00:09:58.945 22:17:57 -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 22:17:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:58.945 22:17:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:58.945 22:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.945 22:17:57 -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 
************************************ 00:09:58.945 START TEST non_locking_app_on_locked_coremask 00:09:58.945 ************************************ 00:09:58.945 22:17:57 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:58.945 22:17:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=107325 00:09:58.945 22:17:57 -- event/cpu_locks.sh@81 -- # waitforlisten 107325 /var/tmp/spdk.sock 00:09:58.945 22:17:57 -- common/autotest_common.sh@819 -- # '[' -z 107325 ']' 00:09:58.945 22:17:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.945 22:17:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:58.945 22:17:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.945 22:17:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:58.945 22:17:57 -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 22:17:57 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:58.945 [2024-07-14 22:17:57.866335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:58.945 [2024-07-14 22:17:57.867370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107325 ] 00:09:59.204 [2024-07-14 22:17:58.034072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.204 [2024-07-14 22:17:58.195004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:59.204 [2024-07-14 22:17:58.195200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:00.581 22:17:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.581 22:17:59 -- common/autotest_common.sh@852 -- # return 0 00:10:00.581 22:17:59 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=107355 00:10:00.581 22:17:59 -- event/cpu_locks.sh@85 -- # waitforlisten 107355 /var/tmp/spdk2.sock 00:10:00.581 22:17:59 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:00.581 22:17:59 -- common/autotest_common.sh@819 -- # '[' -z 107355 ']' 00:10:00.581 22:17:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:00.581 22:17:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:00.581 22:17:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:00.581 22:17:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:00.581 22:17:59 -- common/autotest_common.sh@10 -- # set +x 00:10:00.581 [2024-07-14 22:17:59.474830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:00.581 [2024-07-14 22:17:59.475479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107355 ] 00:10:00.581 [2024-07-14 22:17:59.635540] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
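
This test pins two targets to one core. The first spdk_tgt (pid 107325) takes core 0's lock as usual; the second is launched with --disable-cpumask-locks and its own RPC socket, and the "CPU core locks deactivated." notice above confirms it skipped the lock files, so both instances can share core 0. In outline (spdk_tgt abbreviates the traced build/bin/spdk_tgt path):

    spdk_tgt -m 0x1 &                    # takes the core-0 lock file
    pid1=$!
    waitforlisten "$pid1"                # default socket /var/tmp/spdk.sock

    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                              # same core mask, no lock taken
    waitforlisten "$pid2" /var/tmp/spdk2.sock
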
00:10:00.581 [2024-07-14 22:17:59.635610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.147 [2024-07-14 22:17:59.938500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:01.147 [2024-07-14 22:17:59.938722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.051 22:18:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:03.051 22:18:01 -- common/autotest_common.sh@852 -- # return 0 00:10:03.051 22:18:01 -- event/cpu_locks.sh@87 -- # locks_exist 107325 00:10:03.051 22:18:01 -- event/cpu_locks.sh@22 -- # lslocks -p 107325 00:10:03.051 22:18:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:03.310 22:18:02 -- event/cpu_locks.sh@89 -- # killprocess 107325 00:10:03.310 22:18:02 -- common/autotest_common.sh@926 -- # '[' -z 107325 ']' 00:10:03.310 22:18:02 -- common/autotest_common.sh@930 -- # kill -0 107325 00:10:03.310 22:18:02 -- common/autotest_common.sh@931 -- # uname 00:10:03.310 22:18:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:03.310 22:18:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107325 00:10:03.310 22:18:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:03.310 22:18:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:03.310 22:18:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107325' 00:10:03.310 killing process with pid 107325 00:10:03.310 22:18:02 -- common/autotest_common.sh@945 -- # kill 107325 00:10:03.310 22:18:02 -- common/autotest_common.sh@950 -- # wait 107325 00:10:06.657 22:18:05 -- event/cpu_locks.sh@90 -- # killprocess 107355 00:10:06.657 22:18:05 -- common/autotest_common.sh@926 -- # '[' -z 107355 ']' 00:10:06.657 22:18:05 -- common/autotest_common.sh@930 -- # kill -0 107355 00:10:06.657 22:18:05 -- common/autotest_common.sh@931 -- # uname 00:10:06.657 22:18:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.657 22:18:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107355 00:10:06.657 killing process with pid 107355 00:10:06.657 22:18:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:06.657 22:18:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:06.657 22:18:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107355' 00:10:06.657 22:18:05 -- common/autotest_common.sh@945 -- # kill 107355 00:10:06.657 22:18:05 -- common/autotest_common.sh@950 -- # wait 107355 00:10:08.558 ************************************ 00:10:08.558 END TEST non_locking_app_on_locked_coremask 00:10:08.558 ************************************ 00:10:08.558 00:10:08.558 real 0m9.642s 00:10:08.558 user 0m10.319s 00:10:08.558 sys 0m1.209s 00:10:08.558 22:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.558 22:18:07 -- common/autotest_common.sh@10 -- # set +x 00:10:08.558 22:18:07 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:08.558 22:18:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:08.558 22:18:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.558 22:18:07 -- common/autotest_common.sh@10 -- # set +x 00:10:08.558 ************************************ 00:10:08.558 START TEST locking_app_on_unlocked_coremask 00:10:08.558 ************************************ 00:10:08.558 22:18:07 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:10:08.558 
22:18:07 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=107508 00:10:08.558 22:18:07 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:08.558 22:18:07 -- event/cpu_locks.sh@99 -- # waitforlisten 107508 /var/tmp/spdk.sock 00:10:08.558 22:18:07 -- common/autotest_common.sh@819 -- # '[' -z 107508 ']' 00:10:08.558 22:18:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.558 22:18:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.558 22:18:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.558 22:18:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.558 22:18:07 -- common/autotest_common.sh@10 -- # set +x 00:10:08.558 [2024-07-14 22:18:07.564660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:08.558 [2024-07-14 22:18:07.565080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107508 ] 00:10:08.816 [2024-07-14 22:18:07.730100] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:08.816 [2024-07-14 22:18:07.730333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.075 [2024-07-14 22:18:07.887208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:09.075 [2024-07-14 22:18:07.887692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.450 22:18:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:10.450 22:18:09 -- common/autotest_common.sh@852 -- # return 0 00:10:10.450 22:18:09 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=107531 00:10:10.450 22:18:09 -- event/cpu_locks.sh@103 -- # waitforlisten 107531 /var/tmp/spdk2.sock 00:10:10.450 22:18:09 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:10.450 22:18:09 -- common/autotest_common.sh@819 -- # '[' -z 107531 ']' 00:10:10.450 22:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.450 22:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:10.450 22:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.450 22:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:10.450 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:10:10.450 [2024-07-14 22:18:09.265160] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:10.450 [2024-07-14 22:18:09.265844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107531 ] 00:10:10.450 [2024-07-14 22:18:09.411895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.709 [2024-07-14 22:18:09.731410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.709 [2024-07-14 22:18:09.731648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.632 22:18:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:12.632 22:18:11 -- common/autotest_common.sh@852 -- # return 0 00:10:12.632 22:18:11 -- event/cpu_locks.sh@105 -- # locks_exist 107531 00:10:12.632 22:18:11 -- event/cpu_locks.sh@22 -- # lslocks -p 107531 00:10:12.632 22:18:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.895 22:18:11 -- event/cpu_locks.sh@107 -- # killprocess 107508 00:10:12.895 22:18:11 -- common/autotest_common.sh@926 -- # '[' -z 107508 ']' 00:10:12.895 22:18:11 -- common/autotest_common.sh@930 -- # kill -0 107508 00:10:12.895 22:18:11 -- common/autotest_common.sh@931 -- # uname 00:10:12.895 22:18:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:12.895 22:18:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107508 00:10:12.895 killing process with pid 107508 00:10:12.895 22:18:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:12.895 22:18:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:12.895 22:18:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107508' 00:10:12.895 22:18:11 -- common/autotest_common.sh@945 -- # kill 107508 00:10:12.895 22:18:11 -- common/autotest_common.sh@950 -- # wait 107508 00:10:17.083 22:18:15 -- event/cpu_locks.sh@108 -- # killprocess 107531 00:10:17.083 22:18:15 -- common/autotest_common.sh@926 -- # '[' -z 107531 ']' 00:10:17.083 22:18:15 -- common/autotest_common.sh@930 -- # kill -0 107531 00:10:17.083 22:18:15 -- common/autotest_common.sh@931 -- # uname 00:10:17.083 22:18:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.083 22:18:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107531 00:10:17.083 killing process with pid 107531 00:10:17.083 22:18:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:17.083 22:18:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:17.083 22:18:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107531' 00:10:17.083 22:18:15 -- common/autotest_common.sh@945 -- # kill 107531 00:10:17.083 22:18:15 -- common/autotest_common.sh@950 -- # wait 107531 00:10:18.465 ************************************ 00:10:18.465 END TEST locking_app_on_unlocked_coremask 00:10:18.465 ************************************ 00:10:18.465 00:10:18.465 real 0m9.608s 00:10:18.465 user 0m10.395s 00:10:18.465 sys 0m1.079s 00:10:18.465 22:18:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.465 22:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:18.465 22:18:17 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:18.465 22:18:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:18.465 22:18:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.465 22:18:17 -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.465 ************************************ 00:10:18.465 START TEST locking_app_on_locked_coremask 00:10:18.465 ************************************ 00:10:18.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.465 22:18:17 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:18.465 22:18:17 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107686 00:10:18.465 22:18:17 -- event/cpu_locks.sh@116 -- # waitforlisten 107686 /var/tmp/spdk.sock 00:10:18.465 22:18:17 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:18.465 22:18:17 -- common/autotest_common.sh@819 -- # '[' -z 107686 ']' 00:10:18.465 22:18:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.465 22:18:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.465 22:18:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.465 22:18:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.465 22:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:18.465 [2024-07-14 22:18:17.240987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:18.465 [2024-07-14 22:18:17.241412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107686 ] 00:10:18.465 [2024-07-14 22:18:17.410932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.724 [2024-07-14 22:18:17.591528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.724 [2024-07-14 22:18:17.592011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.098 22:18:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.098 22:18:18 -- common/autotest_common.sh@852 -- # return 0 00:10:20.098 22:18:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107716 00:10:20.098 22:18:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107716 /var/tmp/spdk2.sock 00:10:20.098 22:18:18 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:20.098 22:18:18 -- common/autotest_common.sh@640 -- # local es=0 00:10:20.098 22:18:18 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107716 /var/tmp/spdk2.sock 00:10:20.098 22:18:18 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:20.098 22:18:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:20.098 22:18:18 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:20.098 22:18:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:20.098 22:18:18 -- common/autotest_common.sh@643 -- # waitforlisten 107716 /var/tmp/spdk2.sock 00:10:20.098 22:18:18 -- common/autotest_common.sh@819 -- # '[' -z 107716 ']' 00:10:20.098 22:18:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.098 22:18:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.098 22:18:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:20.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.098 22:18:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.098 22:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:20.098 [2024-07-14 22:18:18.857581] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:20.098 [2024-07-14 22:18:18.858037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107716 ] 00:10:20.098 [2024-07-14 22:18:19.019412] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107686 has claimed it. 00:10:20.098 [2024-07-14 22:18:19.019509] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:20.664 ERROR: process (pid: 107716) is no longer running 00:10:20.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107716) - No such process 00:10:20.664 22:18:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.664 22:18:19 -- common/autotest_common.sh@852 -- # return 1 00:10:20.664 22:18:19 -- common/autotest_common.sh@643 -- # es=1 00:10:20.664 22:18:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:20.664 22:18:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:20.664 22:18:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:20.664 22:18:19 -- event/cpu_locks.sh@122 -- # locks_exist 107686 00:10:20.664 22:18:19 -- event/cpu_locks.sh@22 -- # lslocks -p 107686 00:10:20.664 22:18:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:20.664 22:18:19 -- event/cpu_locks.sh@124 -- # killprocess 107686 00:10:20.664 22:18:19 -- common/autotest_common.sh@926 -- # '[' -z 107686 ']' 00:10:20.664 22:18:19 -- common/autotest_common.sh@930 -- # kill -0 107686 00:10:20.664 22:18:19 -- common/autotest_common.sh@931 -- # uname 00:10:20.664 22:18:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:20.664 22:18:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107686 00:10:20.922 killing process with pid 107686 00:10:20.922 22:18:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:20.922 22:18:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:20.922 22:18:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107686' 00:10:20.922 22:18:19 -- common/autotest_common.sh@945 -- # kill 107686 00:10:20.922 22:18:19 -- common/autotest_common.sh@950 -- # wait 107686 00:10:22.821 ************************************ 00:10:22.821 END TEST locking_app_on_locked_coremask 00:10:22.821 ************************************ 00:10:22.821 00:10:22.821 real 0m4.342s 00:10:22.821 user 0m4.620s 00:10:22.821 sys 0m0.772s 00:10:22.821 22:18:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.821 22:18:21 -- common/autotest_common.sh@10 -- # set +x 00:10:22.821 22:18:21 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:22.821 22:18:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:22.821 22:18:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.821 22:18:21 -- common/autotest_common.sh@10 -- # set +x 00:10:22.821 ************************************ 00:10:22.821 START TEST locking_overlapped_coremask 00:10:22.821 
************************************ 00:10:22.821 22:18:21 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:22.821 22:18:21 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=107778 00:10:22.821 22:18:21 -- event/cpu_locks.sh@133 -- # waitforlisten 107778 /var/tmp/spdk.sock 00:10:22.821 22:18:21 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:22.821 22:18:21 -- common/autotest_common.sh@819 -- # '[' -z 107778 ']' 00:10:22.821 22:18:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.821 22:18:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:22.821 22:18:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.821 22:18:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:22.821 22:18:21 -- common/autotest_common.sh@10 -- # set +x 00:10:22.821 [2024-07-14 22:18:21.626678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:22.821 [2024-07-14 22:18:21.627051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107778 ] 00:10:22.821 [2024-07-14 22:18:21.804737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.080 [2024-07-14 22:18:21.980272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:23.080 [2024-07-14 22:18:21.980901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.080 [2024-07-14 22:18:21.981076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.080 [2024-07-14 22:18:21.981065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.455 22:18:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:24.455 22:18:23 -- common/autotest_common.sh@852 -- # return 0 00:10:24.455 22:18:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=107810 00:10:24.455 22:18:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 107810 /var/tmp/spdk2.sock 00:10:24.455 22:18:23 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:24.455 22:18:23 -- common/autotest_common.sh@640 -- # local es=0 00:10:24.455 22:18:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107810 /var/tmp/spdk2.sock 00:10:24.455 22:18:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:24.455 22:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:24.455 22:18:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:24.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:24.455 22:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:24.455 22:18:23 -- common/autotest_common.sh@643 -- # waitforlisten 107810 /var/tmp/spdk2.sock 00:10:24.455 22:18:23 -- common/autotest_common.sh@819 -- # '[' -z 107810 ']' 00:10:24.455 22:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:24.455 22:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:24.455 22:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:24.455 22:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:24.455 22:18:23 -- common/autotest_common.sh@10 -- # set +x 00:10:24.455 [2024-07-14 22:18:23.314242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:24.455 [2024-07-14 22:18:23.314925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107810 ] 00:10:24.455 [2024-07-14 22:18:23.515760] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107778 has claimed it. 00:10:24.714 [2024-07-14 22:18:23.515884] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:24.971 ERROR: process (pid: 107810) is no longer running 00:10:24.971 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107810) - No such process 00:10:24.971 22:18:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:24.971 22:18:23 -- common/autotest_common.sh@852 -- # return 1 00:10:24.971 22:18:23 -- common/autotest_common.sh@643 -- # es=1 00:10:24.971 22:18:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:24.971 22:18:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:24.971 22:18:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:24.971 22:18:23 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:24.971 22:18:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:24.971 22:18:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:24.972 22:18:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:24.972 22:18:23 -- event/cpu_locks.sh@141 -- # killprocess 107778 00:10:24.972 22:18:23 -- common/autotest_common.sh@926 -- # '[' -z 107778 ']' 00:10:24.972 22:18:23 -- common/autotest_common.sh@930 -- # kill -0 107778 00:10:24.972 22:18:23 -- common/autotest_common.sh@931 -- # uname 00:10:24.972 22:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:24.972 22:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107778 00:10:24.972 killing process with pid 107778 00:10:24.972 22:18:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:24.972 22:18:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:24.972 22:18:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107778' 00:10:24.972 22:18:23 -- common/autotest_common.sh@945 -- # kill 107778 00:10:24.972 22:18:23 -- common/autotest_common.sh@950 -- # wait 107778 00:10:26.872 
************************************ 00:10:26.872 END TEST locking_overlapped_coremask 00:10:26.872 ************************************ 00:10:26.872 00:10:26.873 real 0m4.316s 00:10:26.873 user 0m11.706s 00:10:26.873 sys 0m0.608s 00:10:26.873 22:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.873 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:10:26.873 22:18:25 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:26.873 22:18:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:26.873 22:18:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.873 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:10:26.873 ************************************ 00:10:26.873 START TEST locking_overlapped_coremask_via_rpc 00:10:26.873 ************************************ 00:10:26.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.873 22:18:25 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:26.873 22:18:25 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=107899 00:10:26.873 22:18:25 -- event/cpu_locks.sh@149 -- # waitforlisten 107899 /var/tmp/spdk.sock 00:10:26.873 22:18:25 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:26.873 22:18:25 -- common/autotest_common.sh@819 -- # '[' -z 107899 ']' 00:10:26.873 22:18:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.873 22:18:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:26.873 22:18:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.873 22:18:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:26.873 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:10:27.131 [2024-07-14 22:18:25.992791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:27.131 [2024-07-14 22:18:25.993200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107899 ] 00:10:27.131 [2024-07-14 22:18:26.169531] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:27.131 [2024-07-14 22:18:26.169777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.390 [2024-07-14 22:18:26.328679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:27.390 [2024-07-14 22:18:26.329331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.390 [2024-07-14 22:18:26.329474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.390 [2024-07-14 22:18:26.329481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.765 22:18:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.765 22:18:27 -- common/autotest_common.sh@852 -- # return 0 00:10:28.765 22:18:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=107924 00:10:28.765 22:18:27 -- event/cpu_locks.sh@153 -- # waitforlisten 107924 /var/tmp/spdk2.sock 00:10:28.765 22:18:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:28.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:28.765 22:18:27 -- common/autotest_common.sh@819 -- # '[' -z 107924 ']' 00:10:28.765 22:18:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:28.765 22:18:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:28.765 22:18:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:28.765 22:18:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:28.765 22:18:27 -- common/autotest_common.sh@10 -- # set +x 00:10:28.765 [2024-07-14 22:18:27.708501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:28.765 [2024-07-14 22:18:27.709070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107924 ] 00:10:29.023 [2024-07-14 22:18:27.904744] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:29.023 [2024-07-14 22:18:27.904810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.280 [2024-07-14 22:18:28.294038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.280 [2024-07-14 22:18:28.294581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.280 [2024-07-14 22:18:28.307981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.280 [2024-07-14 22:18:28.307987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:31.182 22:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:31.182 22:18:30 -- common/autotest_common.sh@852 -- # return 0 00:10:31.182 22:18:30 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:31.182 22:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:31.182 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:10:31.182 22:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:31.182 22:18:30 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.182 22:18:30 -- common/autotest_common.sh@640 -- # local es=0 00:10:31.182 22:18:30 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.182 22:18:30 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:31.182 22:18:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.182 22:18:30 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:31.182 22:18:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.182 22:18:30 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.182 22:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:31.182 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:10:31.182 [2024-07-14 22:18:30.028119] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107899 has claimed it. 00:10:31.182 request: 00:10:31.182 { 00:10:31.182 "method": "framework_enable_cpumask_locks", 00:10:31.182 "req_id": 1 00:10:31.182 } 00:10:31.182 Got JSON-RPC error response 00:10:31.182 response: 00:10:31.182 { 00:10:31.182 "code": -32603, 00:10:31.182 "message": "Failed to claim CPU core: 2" 00:10:31.182 } 00:10:31.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.182 22:18:30 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:31.182 22:18:30 -- common/autotest_common.sh@643 -- # es=1 00:10:31.182 22:18:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:31.182 22:18:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:31.182 22:18:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:31.182 22:18:30 -- event/cpu_locks.sh@158 -- # waitforlisten 107899 /var/tmp/spdk.sock 00:10:31.182 22:18:30 -- common/autotest_common.sh@819 -- # '[' -z 107899 ']' 00:10:31.182 22:18:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.182 22:18:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:31.182 22:18:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
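The -32603 response above can be reproduced by hand against the still-running second target. A minimal sketch using SPDK's rpc.py, which the rpc_cmd wrapper in the xtrace invokes (path assumed from the repo checkout used throughout this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

It fails with "Failed to claim CPU core: 2" for as long as pid 107899 still holds /var/tmp/spdk_cpu_lock_002.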
00:10:31.182 22:18:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:31.182 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:10:31.440 22:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:31.440 22:18:30 -- common/autotest_common.sh@852 -- # return 0 00:10:31.440 22:18:30 -- event/cpu_locks.sh@159 -- # waitforlisten 107924 /var/tmp/spdk2.sock 00:10:31.440 22:18:30 -- common/autotest_common.sh@819 -- # '[' -z 107924 ']' 00:10:31.440 22:18:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:31.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:31.440 22:18:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:31.440 22:18:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:31.440 22:18:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:31.440 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:10:31.698 22:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:31.698 22:18:30 -- common/autotest_common.sh@852 -- # return 0 00:10:31.698 22:18:30 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:31.698 22:18:30 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:31.698 22:18:30 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:31.698 22:18:30 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:31.698 00:10:31.698 real 0m4.607s 00:10:31.698 user 0m1.795s 00:10:31.698 sys 0m0.235s 00:10:31.698 22:18:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.698 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:10:31.698 ************************************ 00:10:31.698 END TEST locking_overlapped_coremask_via_rpc 00:10:31.698 ************************************ 00:10:31.698 22:18:30 -- event/cpu_locks.sh@174 -- # cleanup 00:10:31.698 22:18:30 -- event/cpu_locks.sh@15 -- # [[ -z 107899 ]] 00:10:31.698 22:18:30 -- event/cpu_locks.sh@15 -- # killprocess 107899 00:10:31.698 22:18:30 -- common/autotest_common.sh@926 -- # '[' -z 107899 ']' 00:10:31.698 22:18:30 -- common/autotest_common.sh@930 -- # kill -0 107899 00:10:31.698 22:18:30 -- common/autotest_common.sh@931 -- # uname 00:10:31.698 22:18:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:31.698 22:18:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107899 00:10:31.698 killing process with pid 107899 00:10:31.698 22:18:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:31.698 22:18:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:31.698 22:18:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107899' 00:10:31.698 22:18:30 -- common/autotest_common.sh@945 -- # kill 107899 00:10:31.698 22:18:30 -- common/autotest_common.sh@950 -- # wait 107899 00:10:33.599 22:18:32 -- event/cpu_locks.sh@16 -- # [[ -z 107924 ]] 00:10:33.599 22:18:32 -- event/cpu_locks.sh@16 -- # killprocess 107924 00:10:33.599 22:18:32 -- common/autotest_common.sh@926 -- # '[' -z 107924 ']' 00:10:33.599 22:18:32 -- common/autotest_common.sh@930 -- # kill -0 107924 00:10:33.599 22:18:32 -- common/autotest_common.sh@931 -- # uname 00:10:33.599 
22:18:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:33.599 22:18:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107924 00:10:33.599 killing process with pid 107924 00:10:33.599 22:18:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:33.599 22:18:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:33.599 22:18:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107924' 00:10:33.599 22:18:32 -- common/autotest_common.sh@945 -- # kill 107924 00:10:33.599 22:18:32 -- common/autotest_common.sh@950 -- # wait 107924 00:10:35.498 22:18:34 -- event/cpu_locks.sh@18 -- # rm -f 00:10:35.499 Process with pid 107899 is not found 00:10:35.499 Process with pid 107924 is not found 00:10:35.499 22:18:34 -- event/cpu_locks.sh@1 -- # cleanup 00:10:35.499 22:18:34 -- event/cpu_locks.sh@15 -- # [[ -z 107899 ]] 00:10:35.499 22:18:34 -- event/cpu_locks.sh@15 -- # killprocess 107899 00:10:35.499 22:18:34 -- common/autotest_common.sh@926 -- # '[' -z 107899 ']' 00:10:35.499 22:18:34 -- common/autotest_common.sh@930 -- # kill -0 107899 00:10:35.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107899) - No such process 00:10:35.499 22:18:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107899 is not found' 00:10:35.499 22:18:34 -- event/cpu_locks.sh@16 -- # [[ -z 107924 ]] 00:10:35.499 22:18:34 -- event/cpu_locks.sh@16 -- # killprocess 107924 00:10:35.499 22:18:34 -- common/autotest_common.sh@926 -- # '[' -z 107924 ']' 00:10:35.499 22:18:34 -- common/autotest_common.sh@930 -- # kill -0 107924 00:10:35.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107924) - No such process 00:10:35.499 22:18:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107924 is not found' 00:10:35.499 22:18:34 -- event/cpu_locks.sh@18 -- # rm -f 00:10:35.499 ************************************ 00:10:35.499 END TEST cpu_locks 00:10:35.499 ************************************ 00:10:35.499 00:10:35.499 real 0m44.331s 00:10:35.499 user 1m18.117s 00:10:35.499 sys 0m6.210s 00:10:35.499 22:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.499 22:18:34 -- common/autotest_common.sh@10 -- # set +x 00:10:35.499 ************************************ 00:10:35.499 END TEST event 00:10:35.499 ************************************ 00:10:35.499 00:10:35.499 real 1m14.362s 00:10:35.499 user 2m15.809s 00:10:35.499 sys 0m10.032s 00:10:35.499 22:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.499 22:18:34 -- common/autotest_common.sh@10 -- # set +x 00:10:35.499 22:18:34 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:35.499 22:18:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:35.499 22:18:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.499 22:18:34 -- common/autotest_common.sh@10 -- # set +x 00:10:35.499 ************************************ 00:10:35.499 START TEST thread 00:10:35.499 ************************************ 00:10:35.499 22:18:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:35.757 * Looking for test storage... 
00:10:35.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:35.757 22:18:34 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:35.757 22:18:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:35.757 22:18:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.757 22:18:34 -- common/autotest_common.sh@10 -- # set +x 00:10:35.757 ************************************ 00:10:35.757 START TEST thread_poller_perf 00:10:35.757 ************************************ 00:10:35.757 22:18:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:35.757 [2024-07-14 22:18:34.648056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:35.757 [2024-07-14 22:18:34.649062] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108113 ] 00:10:35.757 [2024-07-14 22:18:34.808950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.016 [2024-07-14 22:18:35.059507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.016 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:37.394 ====================================== 00:10:37.394 busy:2212398378 (cyc) 00:10:37.394 total_run_count: 377000 00:10:37.394 tsc_hz: 2200000000 (cyc) 00:10:37.394 ====================================== 00:10:37.394 poller_cost: 5868 (cyc), 2667 (nsec) 00:10:37.394 ************************************ 00:10:37.394 END TEST thread_poller_perf 00:10:37.394 ************************************ 00:10:37.394 00:10:37.394 real 0m1.765s 00:10:37.394 user 0m1.538s 00:10:37.394 sys 0m0.120s 00:10:37.394 22:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.394 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:10:37.394 22:18:36 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:37.394 22:18:36 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:37.394 22:18:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.394 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:10:37.394 ************************************ 00:10:37.394 START TEST thread_poller_perf 00:10:37.394 ************************************ 00:10:37.394 22:18:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:37.652 [2024-07-14 22:18:36.462034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:37.652 [2024-07-14 22:18:36.462342] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108178 ] 00:10:37.652 [2024-07-14 22:18:36.614938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.914 [2024-07-14 22:18:36.771447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.914 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:39.291 ====================================== 00:10:39.291 busy:2204934608 (cyc) 00:10:39.291 total_run_count: 4865000 00:10:39.292 tsc_hz: 2200000000 (cyc) 00:10:39.292 ====================================== 00:10:39.292 poller_cost: 453 (cyc), 205 (nsec) 00:10:39.292 00:10:39.292 real 0m1.661s 00:10:39.292 user 0m1.471s 00:10:39.292 sys 0m0.088s 00:10:39.292 22:18:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.292 22:18:38 -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 ************************************ 00:10:39.292 END TEST thread_poller_perf 00:10:39.292 ************************************ 00:10:39.292 22:18:38 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:39.292 22:18:38 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:39.292 22:18:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:39.292 22:18:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.292 22:18:38 -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 ************************************ 00:10:39.292 START TEST thread_spdk_lock 00:10:39.292 ************************************ 00:10:39.292 22:18:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:39.292 [2024-07-14 22:18:38.194687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:39.292 [2024-07-14 22:18:38.195154] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108226 ] 00:10:39.550 [2024-07-14 22:18:38.368164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.550 [2024-07-14 22:18:38.535718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.550 [2024-07-14 22:18:38.535719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.485 [2024-07-14 22:18:39.221631] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:40.485 [2024-07-14 22:18:39.221891] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:40.485 [2024-07-14 22:18:39.221980] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5625c6f3a840 00:10:40.485 [2024-07-14 22:18:39.228963] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:40.485 [2024-07-14 22:18:39.229203] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:40.485 [2024-07-14 22:18:39.229356] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:40.485 Starting test contend 00:10:40.485 Worker Delay Wait us Hold us Total us 00:10:40.485 0 3 134897 202788 337685 00:10:40.485 1 5 33826 319635 353462 00:10:40.485 PASS test contend 00:10:40.485 Starting test hold_by_poller 
00:10:40.485 PASS test hold_by_poller 00:10:40.485 Starting test hold_by_message 00:10:40.485 PASS test hold_by_message 00:10:40.485 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:40.485 100014 assertions passed 00:10:40.485 0 assertions failed 00:10:40.744 00:10:40.744 real 0m1.394s 00:10:40.744 user 0m1.866s 00:10:40.744 sys 0m0.120s 00:10:40.744 22:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.744 22:18:39 -- common/autotest_common.sh@10 -- # set +x 00:10:40.744 ************************************ 00:10:40.744 END TEST thread_spdk_lock 00:10:40.744 ************************************ 00:10:40.744 ************************************ 00:10:40.744 END TEST thread 00:10:40.744 ************************************ 00:10:40.744 00:10:40.744 real 0m5.061s 00:10:40.744 user 0m4.993s 00:10:40.744 sys 0m0.433s 00:10:40.744 22:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.744 22:18:39 -- common/autotest_common.sh@10 -- # set +x 00:10:40.744 22:18:39 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:40.744 22:18:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:40.744 22:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.744 22:18:39 -- common/autotest_common.sh@10 -- # set +x 00:10:40.744 ************************************ 00:10:40.744 START TEST accel 00:10:40.744 ************************************ 00:10:40.744 22:18:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:40.744 * Looking for test storage... 00:10:40.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:40.744 22:18:39 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:40.744 22:18:39 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:40.744 22:18:39 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:40.744 22:18:39 -- accel/accel.sh@59 -- # spdk_tgt_pid=108304 00:10:40.744 22:18:39 -- accel/accel.sh@60 -- # waitforlisten 108304 00:10:40.744 22:18:39 -- common/autotest_common.sh@819 -- # '[' -z 108304 ']' 00:10:40.744 22:18:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.744 22:18:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:40.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.744 22:18:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.744 22:18:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:40.744 22:18:39 -- common/autotest_common.sh@10 -- # set +x 00:10:40.744 22:18:39 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:40.744 22:18:39 -- accel/accel.sh@58 -- # build_accel_config 00:10:40.744 22:18:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.744 22:18:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.744 22:18:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.744 22:18:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.744 22:18:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.744 22:18:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.744 22:18:39 -- accel/accel.sh@42 -- # jq -r . 00:10:40.744 [2024-07-14 22:18:39.769062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:40.744 [2024-07-14 22:18:39.769261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108304 ] 00:10:41.003 [2024-07-14 22:18:39.926356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.261 [2024-07-14 22:18:40.103398] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:41.261 [2024-07-14 22:18:40.103649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.638 22:18:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:42.638 22:18:41 -- common/autotest_common.sh@852 -- # return 0 00:10:42.638 22:18:41 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:42.638 22:18:41 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:42.638 22:18:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:42.638 22:18:41 -- common/autotest_common.sh@10 -- # set +x 00:10:42.638 22:18:41 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:42.638 22:18:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # IFS== 00:10:42.638 22:18:41 -- accel/accel.sh@64 -- # read -r opc module 00:10:42.638 22:18:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:42.638 22:18:41 -- accel/accel.sh@67 -- # killprocess 108304 00:10:42.638 22:18:41 -- common/autotest_common.sh@926 -- # '[' -z 108304 ']' 00:10:42.638 22:18:41 -- common/autotest_common.sh@930 -- # kill -0 108304 00:10:42.638 22:18:41 -- common/autotest_common.sh@931 -- # uname 00:10:42.638 22:18:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:42.638 22:18:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108304 00:10:42.638 22:18:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:42.638 22:18:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:42.638 killing process with pid 108304 00:10:42.638 22:18:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108304' 00:10:42.638 22:18:41 -- common/autotest_common.sh@945 -- # kill 108304 00:10:42.638 22:18:41 -- common/autotest_common.sh@950 -- # wait 108304 00:10:44.542 22:18:43 -- accel/accel.sh@68 -- # trap - ERR 00:10:44.542 22:18:43 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:44.542 22:18:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:44.542 22:18:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.542 22:18:43 -- common/autotest_common.sh@10 -- # set +x 00:10:44.542 22:18:43 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:44.542 22:18:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:44.542 22:18:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.542 22:18:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.542 22:18:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:44.542 22:18:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.542 22:18:43 -- accel/accel.sh@42 -- # jq -r . 00:10:44.542 22:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.542 22:18:43 -- common/autotest_common.sh@10 -- # set +x 00:10:44.542 22:18:43 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:44.542 22:18:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:44.542 22:18:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.542 22:18:43 -- common/autotest_common.sh@10 -- # set +x 00:10:44.542 ************************************ 00:10:44.542 START TEST accel_missing_filename 00:10:44.542 ************************************ 00:10:44.542 22:18:43 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:44.542 22:18:43 -- common/autotest_common.sh@640 -- # local es=0 00:10:44.542 22:18:43 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:44.542 22:18:43 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:44.542 22:18:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:44.542 22:18:43 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:44.542 22:18:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:44.542 22:18:43 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:44.542 22:18:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:44.542 22:18:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.542 22:18:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.542 22:18:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.542 22:18:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.542 22:18:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.542 22:18:43 -- accel/accel.sh@42 -- # jq -r . 00:10:44.542 [2024-07-14 22:18:43.324444] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:44.542 [2024-07-14 22:18:43.324802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108402 ] 00:10:44.542 [2024-07-14 22:18:43.493809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.800 [2024-07-14 22:18:43.650805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.800 [2024-07-14 22:18:43.816327] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:45.367 [2024-07-14 22:18:44.204933] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:45.626 A filename is required. 
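The es= lines that follow are the harness's NOT helper folding accel_perf's failure exit status back into a pass. A minimal sketch of that inversion, matching the xtrace shown here rather than the exact implementation in test/common/autotest_common.sh:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # fold signal deaths (>128) back into range
        case "$es" in 0) ;; *) es=1 ;; esac  # collapse any remaining failure to 1
        (( !es == 0 ))                       # succeed only when the wrapped command failed
    }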
00:10:45.626 22:18:44 -- common/autotest_common.sh@643 -- # es=234 00:10:45.626 22:18:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:45.626 22:18:44 -- common/autotest_common.sh@652 -- # es=106 00:10:45.626 22:18:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:45.626 22:18:44 -- common/autotest_common.sh@660 -- # es=1 00:10:45.626 22:18:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:45.626 00:10:45.626 real 0m1.248s 00:10:45.626 user 0m1.056s 00:10:45.626 sys 0m0.149s 00:10:45.626 22:18:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.626 ************************************ 00:10:45.626 END TEST accel_missing_filename 00:10:45.626 ************************************ 00:10:45.626 22:18:44 -- common/autotest_common.sh@10 -- # set +x 00:10:45.626 22:18:44 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.626 22:18:44 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:45.626 22:18:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.626 22:18:44 -- common/autotest_common.sh@10 -- # set +x 00:10:45.626 ************************************ 00:10:45.626 START TEST accel_compress_verify 00:10:45.626 ************************************ 00:10:45.626 22:18:44 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.626 22:18:44 -- common/autotest_common.sh@640 -- # local es=0 00:10:45.626 22:18:44 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.626 22:18:44 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:45.626 22:18:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:45.626 22:18:44 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:45.626 22:18:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:45.626 22:18:44 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.626 22:18:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.626 22:18:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.626 22:18:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.626 22:18:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.626 22:18:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.626 22:18:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.626 22:18:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.626 22:18:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.626 22:18:44 -- accel/accel.sh@42 -- # jq -r . 00:10:45.626 [2024-07-14 22:18:44.625446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:45.626 [2024-07-14 22:18:44.625674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108441 ] 00:10:45.884 [2024-07-14 22:18:44.792124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.143 [2024-07-14 22:18:44.970751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.143 [2024-07-14 22:18:45.143425] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:46.710 [2024-07-14 22:18:45.541735] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:46.969 00:10:46.969 Compression does not support the verify option, aborting. 00:10:46.969 22:18:45 -- common/autotest_common.sh@643 -- # es=161 00:10:46.969 22:18:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:46.969 22:18:45 -- common/autotest_common.sh@652 -- # es=33 00:10:46.969 22:18:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:46.969 22:18:45 -- common/autotest_common.sh@660 -- # es=1 00:10:46.969 22:18:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:46.969 00:10:46.969 real 0m1.286s 00:10:46.969 user 0m1.020s 00:10:46.969 sys 0m0.220s 00:10:46.969 22:18:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.969 ************************************ 00:10:46.969 END TEST accel_compress_verify 00:10:46.969 ************************************ 00:10:46.969 22:18:45 -- common/autotest_common.sh@10 -- # set +x 00:10:46.969 22:18:45 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:46.969 22:18:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:46.969 22:18:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.969 22:18:45 -- common/autotest_common.sh@10 -- # set +x 00:10:46.969 ************************************ 00:10:46.969 START TEST accel_wrong_workload 00:10:46.969 ************************************ 00:10:46.969 22:18:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:46.969 22:18:45 -- common/autotest_common.sh@640 -- # local es=0 00:10:46.969 22:18:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:46.969 22:18:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:46.969 22:18:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:46.969 22:18:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:46.969 22:18:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:46.969 22:18:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:46.969 22:18:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:46.969 22:18:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.969 22:18:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.969 22:18:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.969 22:18:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.969 22:18:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.969 22:18:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.969 22:18:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.969 22:18:45 -- accel/accel.sh@42 -- # jq -r . 
00:10:46.969 Unsupported workload type: foobar 00:10:46.969 [2024-07-14 22:18:45.961680] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:46.969 accel_perf options: 00:10:46.969 [-h help message] 00:10:46.969 [-q queue depth per core] 00:10:46.969 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:46.969 [-T number of threads per core 00:10:46.969 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:46.969 [-t time in seconds] 00:10:46.969 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:46.969 [ dif_verify, , dif_generate, dif_generate_copy 00:10:46.969 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:46.969 [-l for compress/decompress workloads, name of uncompressed input file 00:10:46.969 [-S for crc32c workload, use this seed value (default 0) 00:10:46.969 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:46.969 [-f for fill workload, use this BYTE value (default 255) 00:10:46.969 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:46.969 [-y verify result if this switch is on] 00:10:46.969 [-a tasks to allocate per core (default: same value as -q)] 00:10:46.969 Can be used to spread operations across a wider range of memory. 00:10:46.969 22:18:45 -- common/autotest_common.sh@643 -- # es=1 00:10:46.969 22:18:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:46.969 22:18:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:46.969 22:18:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:46.969 00:10:46.969 real 0m0.066s 00:10:46.969 user 0m0.081s 00:10:46.969 sys 0m0.040s 00:10:46.969 22:18:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.969 22:18:45 -- common/autotest_common.sh@10 -- # set +x 00:10:46.969 ************************************ 00:10:46.969 END TEST accel_wrong_workload 00:10:46.969 ************************************ 00:10:46.969 22:18:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:46.969 22:18:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:46.969 22:18:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.969 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 ************************************ 00:10:47.228 START TEST accel_negative_buffers 00:10:47.228 ************************************ 00:10:47.228 22:18:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:47.228 22:18:46 -- common/autotest_common.sh@640 -- # local es=0 00:10:47.228 22:18:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:47.228 22:18:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:47.228 22:18:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.228 22:18:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:47.228 22:18:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.228 22:18:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:47.228 22:18:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:47.228 22:18:46 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:47.228 22:18:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.228 22:18:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.229 22:18:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.229 22:18:46 -- accel/accel.sh@42 -- # jq -r . 00:10:47.229 -x option must be non-negative. 00:10:47.229 [2024-07-14 22:18:46.074867] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:47.229 accel_perf options: 00:10:47.229 [-h help message] 00:10:47.229 [-q queue depth per core] 00:10:47.229 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:47.229 [-T number of threads per core 00:10:47.229 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:47.229 [-t time in seconds] 00:10:47.229 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:47.229 [ dif_verify, , dif_generate, dif_generate_copy 00:10:47.229 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:47.229 [-l for compress/decompress workloads, name of uncompressed input file 00:10:47.229 [-S for crc32c workload, use this seed value (default 0) 00:10:47.229 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:47.229 [-f for fill workload, use this BYTE value (default 255) 00:10:47.229 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:47.229 [-y verify result if this switch is on] 00:10:47.229 [-a tasks to allocate per core (default: same value as -q)] 00:10:47.229 Can be used to spread operations across a wider range of memory. 
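The two negative argument tests above (the foobar workload and the -x -1 buffer count) only assert on accel_perf's exit status after argument parsing fails. Outside the harness, the same checks can be reproduced directly against a built SPDK tree; a minimal sketch, assuming the /home/vagrant/spdk_repo/spdk build layout used in this run:

    # invalid workload type: argument parsing should fail and exit non-zero
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar
    echo "foobar workload exit status: $?"    # the harness above maps this to es=1

    # negative xor source-buffer count: same parse-failure path, via the -x option
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1
    echo "negative -x exit status: $?"        # also expected non-zero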
00:10:47.229 22:18:46 -- common/autotest_common.sh@643 -- # es=1 00:10:47.229 22:18:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:47.229 22:18:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:47.229 22:18:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:47.229 00:10:47.229 real 0m0.060s 00:10:47.229 user 0m0.026s 00:10:47.229 sys 0m0.034s 00:10:47.229 22:18:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.229 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.229 ************************************ 00:10:47.229 END TEST accel_negative_buffers 00:10:47.229 ************************************ 00:10:47.229 22:18:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:47.229 22:18:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:47.229 22:18:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.229 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.229 ************************************ 00:10:47.229 START TEST accel_crc32c 00:10:47.229 ************************************ 00:10:47.229 22:18:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:47.229 22:18:46 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.229 22:18:46 -- accel/accel.sh@17 -- # local accel_module 00:10:47.229 22:18:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:47.229 22:18:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:47.229 22:18:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.229 22:18:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.229 22:18:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.229 22:18:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.229 22:18:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.229 22:18:46 -- accel/accel.sh@42 -- # jq -r . 00:10:47.229 [2024-07-14 22:18:46.197419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:47.229 [2024-07-14 22:18:46.197597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108556 ] 00:10:47.487 [2024-07-14 22:18:46.366212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.745 [2024-07-14 22:18:46.557052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.646 22:18:48 -- accel/accel.sh@18 -- # out=' 00:10:49.646 SPDK Configuration: 00:10:49.646 Core mask: 0x1 00:10:49.646 00:10:49.646 Accel Perf Configuration: 00:10:49.646 Workload Type: crc32c 00:10:49.646 CRC-32C seed: 32 00:10:49.646 Transfer size: 4096 bytes 00:10:49.646 Vector count 1 00:10:49.646 Module: software 00:10:49.646 Queue depth: 32 00:10:49.646 Allocate depth: 32 00:10:49.646 # threads/core: 1 00:10:49.646 Run time: 1 seconds 00:10:49.646 Verify: Yes 00:10:49.646 00:10:49.646 Running for 1 seconds... 
00:10:49.646 00:10:49.646 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.646 ------------------------------------------------------------------------------------ 00:10:49.646 0,0 520608/s 2033 MiB/s 0 0 00:10:49.646 ==================================================================================== 00:10:49.646 Total 520608/s 2033 MiB/s 0 0' 00:10:49.646 22:18:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:49.646 22:18:48 -- accel/accel.sh@20 -- # IFS=: 00:10:49.646 22:18:48 -- accel/accel.sh@20 -- # read -r var val 00:10:49.646 22:18:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:49.646 22:18:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.646 22:18:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.646 22:18:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.646 22:18:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.646 22:18:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.646 22:18:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.646 22:18:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.646 22:18:48 -- accel/accel.sh@42 -- # jq -r . 00:10:49.646 [2024-07-14 22:18:48.492165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:49.646 [2024-07-14 22:18:48.492383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108590 ] 00:10:49.646 [2024-07-14 22:18:48.657206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.905 [2024-07-14 22:18:48.838165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=0x1 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=crc32c 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=32 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=software 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=32 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=32 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=1 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val=Yes 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:50.164 22:18:49 -- accel/accel.sh@21 -- # val= 00:10:50.164 22:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # IFS=: 00:10:50.164 22:18:49 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 
-- accel/accel.sh@20 -- # read -r var val 00:10:52.066 22:18:50 -- accel/accel.sh@21 -- # val= 00:10:52.066 22:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # IFS=: 00:10:52.066 22:18:50 -- accel/accel.sh@20 -- # read -r var val 00:10:52.066 ************************************ 00:10:52.066 END TEST accel_crc32c 00:10:52.066 ************************************ 00:10:52.066 22:18:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.066 22:18:50 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:52.066 22:18:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.066 00:10:52.066 real 0m4.586s 00:10:52.066 user 0m4.061s 00:10:52.066 sys 0m0.371s 00:10:52.066 22:18:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.066 22:18:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.066 22:18:50 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:52.066 22:18:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:52.066 22:18:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.066 22:18:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.066 ************************************ 00:10:52.066 START TEST accel_crc32c_C2 00:10:52.066 ************************************ 00:10:52.066 22:18:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:52.066 22:18:50 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.066 22:18:50 -- accel/accel.sh@17 -- # local accel_module 00:10:52.066 22:18:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:52.066 22:18:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:52.066 22:18:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.066 22:18:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.066 22:18:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.066 22:18:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.066 22:18:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.066 22:18:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.066 22:18:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.066 22:18:50 -- accel/accel.sh@42 -- # jq -r . 00:10:52.066 [2024-07-14 22:18:50.836057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:52.066 [2024-07-14 22:18:50.836416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108643 ] 00:10:52.066 [2024-07-14 22:18:51.005662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.323 [2024-07-14 22:18:51.182393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.221 22:18:53 -- accel/accel.sh@18 -- # out=' 00:10:54.221 SPDK Configuration: 00:10:54.221 Core mask: 0x1 00:10:54.221 00:10:54.221 Accel Perf Configuration: 00:10:54.221 Workload Type: crc32c 00:10:54.221 CRC-32C seed: 0 00:10:54.221 Transfer size: 4096 bytes 00:10:54.221 Vector count 2 00:10:54.221 Module: software 00:10:54.221 Queue depth: 32 00:10:54.221 Allocate depth: 32 00:10:54.221 # threads/core: 1 00:10:54.221 Run time: 1 seconds 00:10:54.221 Verify: Yes 00:10:54.221 00:10:54.221 Running for 1 seconds... 
00:10:54.221 00:10:54.221 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.221 ------------------------------------------------------------------------------------ 00:10:54.221 0,0 406304/s 3174 MiB/s 0 0 00:10:54.221 ==================================================================================== 00:10:54.221 Total 406304/s 1587 MiB/s 0 0' 00:10:54.221 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.221 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.221 22:18:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:54.221 22:18:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.221 22:18:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:54.221 22:18:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.221 22:18:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.221 22:18:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.221 22:18:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.221 22:18:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.221 22:18:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.221 22:18:53 -- accel/accel.sh@42 -- # jq -r . 00:10:54.221 [2024-07-14 22:18:53.126987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:54.221 [2024-07-14 22:18:53.127426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108676 ] 00:10:54.478 [2024-07-14 22:18:53.292121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.478 [2024-07-14 22:18:53.467328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=0x1 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=crc32c 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=0 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=software 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=32 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=32 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=1 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val=Yes 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:54.736 22:18:53 -- accel/accel.sh@21 -- # val= 00:10:54.736 22:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # IFS=: 00:10:54.736 22:18:53 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- 
accel/accel.sh@20 -- # read -r var val 00:10:56.703 22:18:55 -- accel/accel.sh@21 -- # val= 00:10:56.703 22:18:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # IFS=: 00:10:56.703 22:18:55 -- accel/accel.sh@20 -- # read -r var val 00:10:56.703 ************************************ 00:10:56.703 END TEST accel_crc32c_C2 00:10:56.703 ************************************ 00:10:56.703 22:18:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.703 22:18:55 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:56.703 22:18:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.703 00:10:56.703 real 0m4.592s 00:10:56.703 user 0m4.106s 00:10:56.703 sys 0m0.350s 00:10:56.703 22:18:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.703 22:18:55 -- common/autotest_common.sh@10 -- # set +x 00:10:56.703 22:18:55 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:56.703 22:18:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:56.703 22:18:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:56.703 22:18:55 -- common/autotest_common.sh@10 -- # set +x 00:10:56.703 ************************************ 00:10:56.703 START TEST accel_copy 00:10:56.703 ************************************ 00:10:56.703 22:18:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:56.703 22:18:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.703 22:18:55 -- accel/accel.sh@17 -- # local accel_module 00:10:56.703 22:18:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:56.703 22:18:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:56.703 22:18:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.703 22:18:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.703 22:18:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.703 22:18:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.703 22:18:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.703 22:18:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.703 22:18:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.703 22:18:55 -- accel/accel.sh@42 -- # jq -r . 00:10:56.703 [2024-07-14 22:18:55.483979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:56.703 [2024-07-14 22:18:55.484339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108749 ] 00:10:56.703 [2024-07-14 22:18:55.653911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.961 [2024-07-14 22:18:55.830566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.859 22:18:57 -- accel/accel.sh@18 -- # out=' 00:10:58.859 SPDK Configuration: 00:10:58.859 Core mask: 0x1 00:10:58.859 00:10:58.859 Accel Perf Configuration: 00:10:58.859 Workload Type: copy 00:10:58.859 Transfer size: 4096 bytes 00:10:58.859 Vector count 1 00:10:58.859 Module: software 00:10:58.859 Queue depth: 32 00:10:58.859 Allocate depth: 32 00:10:58.859 # threads/core: 1 00:10:58.859 Run time: 1 seconds 00:10:58.859 Verify: Yes 00:10:58.859 00:10:58.859 Running for 1 seconds... 
00:10:58.859 00:10:58.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.859 ------------------------------------------------------------------------------------ 00:10:58.859 0,0 317248/s 1239 MiB/s 0 0 00:10:58.859 ==================================================================================== 00:10:58.859 Total 317248/s 1239 MiB/s 0 0' 00:10:58.859 22:18:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.859 22:18:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.859 22:18:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:58.859 22:18:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:58.859 22:18:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.859 22:18:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.859 22:18:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.859 22:18:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.859 22:18:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.859 22:18:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.859 22:18:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.859 22:18:57 -- accel/accel.sh@42 -- # jq -r . 00:10:58.859 [2024-07-14 22:18:57.767955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:58.859 [2024-07-14 22:18:57.769932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108783 ] 00:10:59.117 [2024-07-14 22:18:57.920835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.117 [2024-07-14 22:18:58.118207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=0x1 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=copy 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- 
accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=software 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@23 -- # accel_module=software 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=32 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=32 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=1 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val=Yes 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:10:59.376 22:18:58 -- accel/accel.sh@21 -- # val= 00:10:59.376 22:18:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # IFS=: 00:10:59.376 22:18:58 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.278 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.278 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.278 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.278 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.278 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.278 22:19:00 -- accel/accel.sh@20 -- # read -r var val 00:11:01.278 22:19:00 -- accel/accel.sh@21 -- # val= 00:11:01.279 22:19:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.279 22:19:00 -- accel/accel.sh@20 -- # IFS=: 00:11:01.279 22:19:00 -- 
accel/accel.sh@20 -- # read -r var val 00:11:01.279 ************************************ 00:11:01.279 END TEST accel_copy 00:11:01.279 ************************************ 00:11:01.279 22:19:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:01.279 22:19:00 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:11:01.279 22:19:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:01.279 00:11:01.279 real 0m4.594s 00:11:01.279 user 0m4.097s 00:11:01.279 sys 0m0.345s 00:11:01.279 22:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.279 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:11:01.279 22:19:00 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:01.279 22:19:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:01.279 22:19:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.279 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:11:01.279 ************************************ 00:11:01.279 START TEST accel_fill 00:11:01.279 ************************************ 00:11:01.279 22:19:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:01.279 22:19:00 -- accel/accel.sh@16 -- # local accel_opc 00:11:01.279 22:19:00 -- accel/accel.sh@17 -- # local accel_module 00:11:01.279 22:19:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:01.279 22:19:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:01.279 22:19:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.279 22:19:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.279 22:19:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.279 22:19:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.279 22:19:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.279 22:19:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.279 22:19:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.279 22:19:00 -- accel/accel.sh@42 -- # jq -r . 00:11:01.279 [2024-07-14 22:19:00.142649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:01.279 [2024-07-14 22:19:00.143042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108830 ] 00:11:01.279 [2024-07-14 22:19:00.312384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.538 [2024-07-14 22:19:00.501838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.470 22:19:02 -- accel/accel.sh@18 -- # out=' 00:11:03.470 SPDK Configuration: 00:11:03.470 Core mask: 0x1 00:11:03.470 00:11:03.470 Accel Perf Configuration: 00:11:03.470 Workload Type: fill 00:11:03.470 Fill pattern: 0x80 00:11:03.470 Transfer size: 4096 bytes 00:11:03.470 Vector count 1 00:11:03.470 Module: software 00:11:03.470 Queue depth: 64 00:11:03.470 Allocate depth: 64 00:11:03.470 # threads/core: 1 00:11:03.470 Run time: 1 seconds 00:11:03.470 Verify: Yes 00:11:03.470 00:11:03.470 Running for 1 seconds... 
00:11:03.470 00:11:03.470 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.470 ------------------------------------------------------------------------------------ 00:11:03.470 0,0 475008/s 1855 MiB/s 0 0 00:11:03.470 ==================================================================================== 00:11:03.470 Total 475008/s 1855 MiB/s 0 0' 00:11:03.470 22:19:02 -- accel/accel.sh@20 -- # IFS=: 00:11:03.470 22:19:02 -- accel/accel.sh@20 -- # read -r var val 00:11:03.470 22:19:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:03.470 22:19:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.470 22:19:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:03.470 22:19:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.470 22:19:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.470 22:19:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.470 22:19:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.470 22:19:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.470 22:19:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.470 22:19:02 -- accel/accel.sh@42 -- # jq -r . 00:11:03.470 [2024-07-14 22:19:02.467124] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:03.470 [2024-07-14 22:19:02.467526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108871 ] 00:11:03.730 [2024-07-14 22:19:02.637408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.988 [2024-07-14 22:19:02.821306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.988 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.988 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.988 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.988 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.988 22:19:03 -- accel/accel.sh@21 -- # val=0x1 00:11:03.988 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.988 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.988 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.988 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.988 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.988 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=fill 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@24 -- # accel_opc=fill 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=0x80 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 
00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=software 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=64 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=64 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=1 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val=Yes 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:03.989 22:19:03 -- accel/accel.sh@21 -- # val= 00:11:03.989 22:19:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # IFS=: 00:11:03.989 22:19:03 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 
00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 22:19:04 -- accel/accel.sh@21 -- # val= 00:11:05.892 22:19:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # IFS=: 00:11:05.892 22:19:04 -- accel/accel.sh@20 -- # read -r var val 00:11:05.892 ************************************ 00:11:05.892 END TEST accel_fill 00:11:05.892 ************************************ 00:11:05.892 22:19:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.892 22:19:04 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:11:05.892 22:19:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.892 00:11:05.892 real 0m4.636s 00:11:05.892 user 0m4.154s 00:11:05.892 sys 0m0.341s 00:11:05.892 22:19:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.892 22:19:04 -- common/autotest_common.sh@10 -- # set +x 00:11:05.892 22:19:04 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:05.892 22:19:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:05.892 22:19:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.892 22:19:04 -- common/autotest_common.sh@10 -- # set +x 00:11:05.892 ************************************ 00:11:05.892 START TEST accel_copy_crc32c 00:11:05.892 ************************************ 00:11:05.892 22:19:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:11:05.892 22:19:04 -- accel/accel.sh@16 -- # local accel_opc 00:11:05.892 22:19:04 -- accel/accel.sh@17 -- # local accel_module 00:11:05.892 22:19:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:05.892 22:19:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:05.892 22:19:04 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.892 22:19:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.893 22:19:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.893 22:19:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.893 22:19:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.893 22:19:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.893 22:19:04 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.893 22:19:04 -- accel/accel.sh@42 -- # jq -r . 00:11:05.893 [2024-07-14 22:19:04.832311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:05.893 [2024-07-14 22:19:04.832670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108918 ] 00:11:06.151 [2024-07-14 22:19:05.001703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.151 [2024-07-14 22:19:05.191875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.051 22:19:07 -- accel/accel.sh@18 -- # out=' 00:11:08.051 SPDK Configuration: 00:11:08.051 Core mask: 0x1 00:11:08.051 00:11:08.051 Accel Perf Configuration: 00:11:08.051 Workload Type: copy_crc32c 00:11:08.051 CRC-32C seed: 0 00:11:08.051 Vector size: 4096 bytes 00:11:08.051 Transfer size: 4096 bytes 00:11:08.051 Vector count 1 00:11:08.051 Module: software 00:11:08.051 Queue depth: 32 00:11:08.051 Allocate depth: 32 00:11:08.051 # threads/core: 1 00:11:08.051 Run time: 1 seconds 00:11:08.051 Verify: Yes 00:11:08.051 00:11:08.051 Running for 1 seconds... 
00:11:08.051 00:11:08.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.052 ------------------------------------------------------------------------------------ 00:11:08.052 0,0 253536/s 990 MiB/s 0 0 00:11:08.052 ==================================================================================== 00:11:08.052 Total 253536/s 990 MiB/s 0 0' 00:11:08.052 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.052 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.052 22:19:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:08.052 22:19:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:08.052 22:19:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.052 22:19:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.052 22:19:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.052 22:19:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.052 22:19:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.052 22:19:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.052 22:19:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.052 22:19:07 -- accel/accel.sh@42 -- # jq -r . 00:11:08.309 [2024-07-14 22:19:07.132901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:08.309 [2024-07-14 22:19:07.133452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108972 ] 00:11:08.309 [2024-07-14 22:19:07.302396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.568 [2024-07-14 22:19:07.501862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=0x1 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=0 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 
22:19:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=software 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=32 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=32 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=1 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val=Yes 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.827 22:19:07 -- accel/accel.sh@21 -- # val= 00:11:08.827 22:19:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.827 22:19:07 -- accel/accel.sh@20 -- # read -r var val 00:11:10.756 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.756 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.756 22:19:09 -- accel/accel.sh@20 -- # IFS=: 00:11:10.756 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.757 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # IFS=: 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.757 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # IFS=: 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.757 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # IFS=: 
00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.757 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # IFS=: 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 22:19:09 -- accel/accel.sh@21 -- # val= 00:11:10.757 22:19:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # IFS=: 00:11:10.757 22:19:09 -- accel/accel.sh@20 -- # read -r var val 00:11:10.757 ************************************ 00:11:10.757 END TEST accel_copy_crc32c 00:11:10.757 ************************************ 00:11:10.757 22:19:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.757 22:19:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:10.757 22:19:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.757 00:11:10.757 real 0m4.612s 00:11:10.757 user 0m4.154s 00:11:10.757 sys 0m0.321s 00:11:10.757 22:19:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.757 22:19:09 -- common/autotest_common.sh@10 -- # set +x 00:11:10.757 22:19:09 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:10.757 22:19:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:10.757 22:19:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.757 22:19:09 -- common/autotest_common.sh@10 -- # set +x 00:11:10.757 ************************************ 00:11:10.757 START TEST accel_copy_crc32c_C2 00:11:10.757 ************************************ 00:11:10.757 22:19:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:10.757 22:19:09 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.757 22:19:09 -- accel/accel.sh@17 -- # local accel_module 00:11:10.757 22:19:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:10.757 22:19:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:10.757 22:19:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.757 22:19:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.757 22:19:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.757 22:19:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.757 22:19:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.757 22:19:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.757 22:19:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.757 22:19:09 -- accel/accel.sh@42 -- # jq -r . 00:11:10.757 [2024-07-14 22:19:09.501889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:10.757 [2024-07-14 22:19:09.502266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109027 ] 00:11:10.757 [2024-07-14 22:19:09.670480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.016 [2024-07-14 22:19:09.863519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.916 22:19:11 -- accel/accel.sh@18 -- # out=' 00:11:12.916 SPDK Configuration: 00:11:12.916 Core mask: 0x1 00:11:12.916 00:11:12.916 Accel Perf Configuration: 00:11:12.916 Workload Type: copy_crc32c 00:11:12.916 CRC-32C seed: 0 00:11:12.916 Vector size: 4096 bytes 00:11:12.916 Transfer size: 8192 bytes 00:11:12.916 Vector count 2 00:11:12.916 Module: software 00:11:12.916 Queue depth: 32 00:11:12.916 Allocate depth: 32 00:11:12.916 # threads/core: 1 00:11:12.916 Run time: 1 seconds 00:11:12.916 Verify: Yes 00:11:12.916 00:11:12.916 Running for 1 seconds... 00:11:12.916 00:11:12.916 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:12.916 ------------------------------------------------------------------------------------ 00:11:12.916 0,0 179904/s 1405 MiB/s 0 0 00:11:12.916 ==================================================================================== 00:11:12.916 Total 179904/s 702 MiB/s 0 0' 00:11:12.916 22:19:11 -- accel/accel.sh@20 -- # IFS=: 00:11:12.916 22:19:11 -- accel/accel.sh@20 -- # read -r var val 00:11:12.916 22:19:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:12.916 22:19:11 -- accel/accel.sh@12 -- # build_accel_config 00:11:12.916 22:19:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:12.916 22:19:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:12.916 22:19:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.916 22:19:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.916 22:19:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:12.916 22:19:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:12.916 22:19:11 -- accel/accel.sh@41 -- # local IFS=, 00:11:12.916 22:19:11 -- accel/accel.sh@42 -- # jq -r . 00:11:12.916 [2024-07-14 22:19:11.806774] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:12.916 [2024-07-14 22:19:11.807202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109065 ] 00:11:13.175 [2024-07-14 22:19:11.977404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.175 [2024-07-14 22:19:12.153906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=0x1 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=0 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=software 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@23 -- # accel_module=software 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=32 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=32 
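# The copy_crc32c_C2 table above reports 179904 transfers/s as 1405 MiB/s on
# the per-core row but 702 MiB/s on the Total row. The two reconcile once the
# two-vector payload is accounted for; checking against this run's own figures:
echo $(( 179904 * 8192 / 1024 / 1024 ))   # 1405 -> per-core row (2 x 4096-byte vectors)
echo $(( 179904 * 4096 / 1024 / 1024 ))   # 702  -> Total row, normalized per vector
# So the halved Total appears to be a reporting convention of the -C 2
# workload, not a slower second pass.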
00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=1 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val=Yes 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:13.434 22:19:12 -- accel/accel.sh@21 -- # val= 00:11:13.434 22:19:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # IFS=: 00:11:13.434 22:19:12 -- accel/accel.sh@20 -- # read -r var val 00:11:15.334 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 22:19:14 -- accel/accel.sh@21 -- # val= 00:11:15.335 22:19:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # IFS=: 00:11:15.335 22:19:14 -- accel/accel.sh@20 -- # read -r var val 00:11:15.335 ************************************ 00:11:15.335 END TEST accel_copy_crc32c_C2 00:11:15.335 ************************************ 00:11:15.335 22:19:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.335 22:19:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:15.335 22:19:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.335 00:11:15.335 real 0m4.605s 00:11:15.335 user 0m4.070s 00:11:15.335 sys 0m0.386s 00:11:15.335 22:19:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.335 22:19:14 -- common/autotest_common.sh@10 -- # set +x 00:11:15.335 22:19:14 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:15.335 22:19:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
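# Each accel_perf invocation in this section is echoed in full by the trace,
# e.g. "accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2" above. The -c
# argument points at a JSON accel config that the harness streams over an
# inherited fd; a hypothetical reduced equivalent using process substitution
# ($cfg_json is a stand-in name, not the harness variable):
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(printf '%s' "$cfg_json") -t 1 -w copy_crc32c -y -C 2
# With the config left empty, as in this run ([[ -n '' ]] above), the default
# software module services the workload -- hence "Module: software" in every
# configuration dump here.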
00:11:15.335 22:19:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.335 22:19:14 -- common/autotest_common.sh@10 -- # set +x 00:11:15.335 ************************************ 00:11:15.335 START TEST accel_dualcast 00:11:15.335 ************************************ 00:11:15.335 22:19:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:15.335 22:19:14 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.335 22:19:14 -- accel/accel.sh@17 -- # local accel_module 00:11:15.335 22:19:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:15.335 22:19:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:15.335 22:19:14 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.335 22:19:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.335 22:19:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.335 22:19:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.335 22:19:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.335 22:19:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.335 22:19:14 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.335 22:19:14 -- accel/accel.sh@42 -- # jq -r . 00:11:15.335 [2024-07-14 22:19:14.160420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:15.335 [2024-07-14 22:19:14.160787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109112 ] 00:11:15.335 [2024-07-14 22:19:14.328787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.592 [2024-07-14 22:19:14.505332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.489 22:19:16 -- accel/accel.sh@18 -- # out=' 00:11:17.490 SPDK Configuration: 00:11:17.490 Core mask: 0x1 00:11:17.490 00:11:17.490 Accel Perf Configuration: 00:11:17.490 Workload Type: dualcast 00:11:17.490 Transfer size: 4096 bytes 00:11:17.490 Vector count 1 00:11:17.490 Module: software 00:11:17.490 Queue depth: 32 00:11:17.490 Allocate depth: 32 00:11:17.490 # threads/core: 1 00:11:17.490 Run time: 1 seconds 00:11:17.490 Verify: Yes 00:11:17.490 00:11:17.490 Running for 1 seconds... 00:11:17.490 00:11:17.490 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.490 ------------------------------------------------------------------------------------ 00:11:17.490 0,0 330592/s 1291 MiB/s 0 0 00:11:17.490 ==================================================================================== 00:11:17.490 Total 330592/s 1291 MiB/s 0 0' 00:11:17.490 22:19:16 -- accel/accel.sh@20 -- # IFS=: 00:11:17.490 22:19:16 -- accel/accel.sh@20 -- # read -r var val 00:11:17.490 22:19:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:17.490 22:19:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.490 22:19:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:17.490 22:19:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.490 22:19:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.490 22:19:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.490 22:19:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.490 22:19:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.490 22:19:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.490 22:19:16 -- accel/accel.sh@42 -- # jq -r . 
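dualcast copies a single 4096-byte source buffer into two destination buffers per operation, which is what the Workload Type in the table above measures (330592 transfers/s, 1291 MiB/s). Reproducing just this case outside the harness should need only the flags already visible in the trace, assuming the same build-tree path as this runner:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y

The -c /dev/fd/62 config argument can likely be dropped for a standalone run, since an empty config falls back to the software module anyway.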
00:11:17.490 [2024-07-14 22:19:16.495536] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:17.490 [2024-07-14 22:19:16.496162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109164 ] 00:11:17.748 [2024-07-14 22:19:16.666901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.006 [2024-07-14 22:19:16.844820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=0x1 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=dualcast 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=software 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=32 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=32 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=1 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 
22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val=Yes 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:18.006 22:19:17 -- accel/accel.sh@21 -- # val= 00:11:18.006 22:19:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # IFS=: 00:11:18.006 22:19:17 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 22:19:18 -- accel/accel.sh@21 -- # val= 00:11:19.905 22:19:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # IFS=: 00:11:19.905 22:19:18 -- accel/accel.sh@20 -- # read -r var val 00:11:19.905 ************************************ 00:11:19.905 END TEST accel_dualcast 00:11:19.905 ************************************ 00:11:19.905 22:19:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.905 22:19:18 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:19.905 22:19:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.905 00:11:19.905 real 0m4.668s 00:11:19.905 user 0m4.170s 00:11:19.905 sys 0m0.334s 00:11:19.905 22:19:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.905 22:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.905 22:19:18 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:19.905 22:19:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:19.905 22:19:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.905 22:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.905 ************************************ 00:11:19.905 START TEST accel_compare 00:11:19.905 ************************************ 00:11:19.905 22:19:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:19.905 
22:19:18 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.905 22:19:18 -- accel/accel.sh@17 -- # local accel_module 00:11:19.905 22:19:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:19.905 22:19:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:19.905 22:19:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.905 22:19:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.905 22:19:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.905 22:19:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.905 22:19:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.905 22:19:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.905 22:19:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.905 22:19:18 -- accel/accel.sh@42 -- # jq -r . 00:11:19.905 [2024-07-14 22:19:18.885705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:19.905 [2024-07-14 22:19:18.886639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109218 ] 00:11:20.163 [2024-07-14 22:19:19.053976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.421 [2024-07-14 22:19:19.250828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.317 22:19:21 -- accel/accel.sh@18 -- # out=' 00:11:22.317 SPDK Configuration: 00:11:22.317 Core mask: 0x1 00:11:22.317 00:11:22.317 Accel Perf Configuration: 00:11:22.317 Workload Type: compare 00:11:22.317 Transfer size: 4096 bytes 00:11:22.317 Vector count 1 00:11:22.317 Module: software 00:11:22.317 Queue depth: 32 00:11:22.317 Allocate depth: 32 00:11:22.317 # threads/core: 1 00:11:22.317 Run time: 1 seconds 00:11:22.317 Verify: Yes 00:11:22.317 00:11:22.317 Running for 1 seconds... 00:11:22.317 00:11:22.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:22.317 ------------------------------------------------------------------------------------ 00:11:22.317 0,0 463168/s 1809 MiB/s 0 0 00:11:22.317 ==================================================================================== 00:11:22.317 Total 463168/s 1809 MiB/s 0 0' 00:11:22.317 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.317 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.317 22:19:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:22.317 22:19:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.317 22:19:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:22.317 22:19:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.317 22:19:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.317 22:19:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.317 22:19:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.317 22:19:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.317 22:19:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.317 22:19:21 -- accel/accel.sh@42 -- # jq -r . 00:11:22.317 [2024-07-14 22:19:21.244241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:22.317 [2024-07-14 22:19:21.244700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109252 ] 00:11:22.574 [2024-07-14 22:19:21.413580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.574 [2024-07-14 22:19:21.616414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=0x1 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=compare 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=software 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@23 -- # accel_module=software 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=32 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=32 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=1 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val='1 seconds' 
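# The START/END banners and the real/user/sys timings around every case in
# this section come from run_test in autotest_common.sh, invoked above as,
# e.g., "run_test accel_compare accel_test -t 1 -w compare -y". A simplified
# sketch of that wrapper (illustration only, not the real implementation):
run_test() {
    local name=$1
    shift
    echo "START TEST $name"
    time "$@" || return 1
    echo "END TEST $name"
}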
00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val=Yes 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:22.832 22:19:21 -- accel/accel.sh@21 -- # val= 00:11:22.832 22:19:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # IFS=: 00:11:22.832 22:19:21 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@21 -- # val= 00:11:24.731 22:19:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # IFS=: 00:11:24.731 22:19:23 -- accel/accel.sh@20 -- # read -r var val 00:11:24.731 22:19:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:24.731 22:19:23 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:24.731 22:19:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:24.731 00:11:24.731 real 0m4.690s 00:11:24.731 user 0m4.160s 00:11:24.731 sys 0m0.380s 00:11:24.731 22:19:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.731 ************************************ 00:11:24.731 END TEST accel_compare 00:11:24.731 ************************************ 00:11:24.731 22:19:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.731 22:19:23 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:24.731 22:19:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:24.731 22:19:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.731 22:19:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.731 ************************************ 00:11:24.731 START TEST accel_xor 00:11:24.731 ************************************ 00:11:24.731 22:19:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:24.731 22:19:23 -- accel/accel.sh@16 -- # local accel_opc 00:11:24.731 22:19:23 -- accel/accel.sh@17 -- # local accel_module 00:11:24.731 
22:19:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:24.731 22:19:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:24.731 22:19:23 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.731 22:19:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.731 22:19:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.731 22:19:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.731 22:19:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.731 22:19:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.731 22:19:23 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.731 22:19:23 -- accel/accel.sh@42 -- # jq -r . 00:11:24.731 [2024-07-14 22:19:23.630727] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:24.731 [2024-07-14 22:19:23.631688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109299 ] 00:11:24.988 [2024-07-14 22:19:23.797127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.988 [2024-07-14 22:19:23.981000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.888 22:19:25 -- accel/accel.sh@18 -- # out=' 00:11:26.888 SPDK Configuration: 00:11:26.888 Core mask: 0x1 00:11:26.888 00:11:26.888 Accel Perf Configuration: 00:11:26.888 Workload Type: xor 00:11:26.888 Source buffers: 2 00:11:26.888 Transfer size: 4096 bytes 00:11:26.888 Vector count 1 00:11:26.888 Module: software 00:11:26.888 Queue depth: 32 00:11:26.888 Allocate depth: 32 00:11:26.888 # threads/core: 1 00:11:26.888 Run time: 1 seconds 00:11:26.888 Verify: Yes 00:11:26.888 00:11:26.888 Running for 1 seconds... 00:11:26.888 00:11:26.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:26.888 ------------------------------------------------------------------------------------ 00:11:26.888 0,0 248512/s 970 MiB/s 0 0 00:11:26.888 ==================================================================================== 00:11:26.888 Total 248512/s 970 MiB/s 0 0' 00:11:26.888 22:19:25 -- accel/accel.sh@20 -- # IFS=: 00:11:26.888 22:19:25 -- accel/accel.sh@20 -- # read -r var val 00:11:26.888 22:19:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:26.888 22:19:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:26.888 22:19:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.888 22:19:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.888 22:19:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.888 22:19:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.888 22:19:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.888 22:19:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.888 22:19:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.888 22:19:25 -- accel/accel.sh@42 -- # jq -r . 00:11:26.888 [2024-07-14 22:19:25.927210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:26.888 [2024-07-14 22:19:25.927655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109344 ] 00:11:27.147 [2024-07-14 22:19:26.097174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.406 [2024-07-14 22:19:26.287206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=0x1 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=xor 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=2 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=software 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@23 -- # accel_module=software 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=32 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=32 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=1 00:11:27.665 22:19:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val=Yes 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:27.665 22:19:26 -- accel/accel.sh@21 -- # val= 00:11:27.665 22:19:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # IFS=: 00:11:27.665 22:19:26 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 22:19:28 -- accel/accel.sh@21 -- # val= 00:11:29.568 22:19:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # IFS=: 00:11:29.568 22:19:28 -- accel/accel.sh@20 -- # read -r var val 00:11:29.568 ************************************ 00:11:29.568 END TEST accel_xor 00:11:29.568 ************************************ 00:11:29.568 22:19:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.568 22:19:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:29.568 22:19:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.568 00:11:29.568 real 0m4.598s 00:11:29.568 user 0m4.042s 00:11:29.568 sys 0m0.381s 00:11:29.568 22:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.568 22:19:28 -- common/autotest_common.sh@10 -- # set +x 00:11:29.568 22:19:28 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:29.568 22:19:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:29.568 22:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.568 22:19:28 -- common/autotest_common.sh@10 -- # set +x 00:11:29.568 ************************************ 00:11:29.568 START TEST accel_xor 00:11:29.568 ************************************ 00:11:29.568 
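This second xor case raises the source-buffer count from two to three via -x 3 (the run_test line above shows the exact flags). Each destination byte is the XOR of the corresponding byte of every source buffer, which shell arithmetic can sanity-check directly:

  echo $(( 0xA5 ^ 0x5A ^ 0xFF ))   # prints 0: 0xA5 ^ 0x5A = 0xFF, and 0xFF ^ 0xFF = 0

In the results below, the extra source costs only about 4% throughput: 238656 transfers/s against 248512/s for the two-source run above.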
22:19:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:29.568 22:19:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:29.568 22:19:28 -- accel/accel.sh@17 -- # local accel_module 00:11:29.568 22:19:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:29.568 22:19:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:29.568 22:19:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:29.568 22:19:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.568 22:19:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.568 22:19:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.568 22:19:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.568 22:19:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.568 22:19:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.568 22:19:28 -- accel/accel.sh@42 -- # jq -r . 00:11:29.568 [2024-07-14 22:19:28.277034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:29.568 [2024-07-14 22:19:28.277378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109408 ] 00:11:29.568 [2024-07-14 22:19:28.441636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.826 [2024-07-14 22:19:28.639022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.728 22:19:30 -- accel/accel.sh@18 -- # out=' 00:11:31.728 SPDK Configuration: 00:11:31.728 Core mask: 0x1 00:11:31.728 00:11:31.728 Accel Perf Configuration: 00:11:31.728 Workload Type: xor 00:11:31.728 Source buffers: 3 00:11:31.728 Transfer size: 4096 bytes 00:11:31.728 Vector count 1 00:11:31.728 Module: software 00:11:31.728 Queue depth: 32 00:11:31.728 Allocate depth: 32 00:11:31.728 # threads/core: 1 00:11:31.728 Run time: 1 seconds 00:11:31.728 Verify: Yes 00:11:31.728 00:11:31.728 Running for 1 seconds... 00:11:31.728 00:11:31.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:31.728 ------------------------------------------------------------------------------------ 00:11:31.728 0,0 238656/s 932 MiB/s 0 0 00:11:31.728 ==================================================================================== 00:11:31.728 Total 238656/s 932 MiB/s 0 0' 00:11:31.728 22:19:30 -- accel/accel.sh@20 -- # IFS=: 00:11:31.728 22:19:30 -- accel/accel.sh@20 -- # read -r var val 00:11:31.728 22:19:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:31.728 22:19:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:31.728 22:19:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:31.728 22:19:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:31.728 22:19:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.728 22:19:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.729 22:19:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:31.729 22:19:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:31.729 22:19:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:31.729 22:19:30 -- accel/accel.sh@42 -- # jq -r . 00:11:31.729 [2024-07-14 22:19:30.592574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:31.729 [2024-07-14 22:19:30.592974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109440 ] 00:11:31.729 [2024-07-14 22:19:30.761686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.986 [2024-07-14 22:19:30.942683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.243 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.243 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.243 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.243 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.243 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.243 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=0x1 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=xor 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=3 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=software 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@23 -- # accel_module=software 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=32 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=32 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=1 00:11:32.244 22:19:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val=Yes 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:32.244 22:19:31 -- accel/accel.sh@21 -- # val= 00:11:32.244 22:19:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # IFS=: 00:11:32.244 22:19:31 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 22:19:32 -- accel/accel.sh@21 -- # val= 00:11:34.140 22:19:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # IFS=: 00:11:34.140 22:19:32 -- accel/accel.sh@20 -- # read -r var val 00:11:34.140 ************************************ 00:11:34.140 END TEST accel_xor 00:11:34.140 ************************************ 00:11:34.140 22:19:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:34.140 22:19:32 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:34.140 22:19:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:34.140 00:11:34.140 real 0m4.639s 00:11:34.140 user 0m4.134s 00:11:34.140 sys 0m0.355s 00:11:34.140 22:19:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.140 22:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:34.140 22:19:32 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:34.140 22:19:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:34.140 22:19:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.140 22:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:34.140 ************************************ 00:11:34.140 START TEST accel_dif_verify 00:11:34.140 ************************************ 
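dif_verify exercises T10 DIF protection checking: per the configuration printed below, each 4096-byte transfer is treated as 512-byte blocks, each carrying 8 bytes of protection information (guard CRC, application and reference tags). Note that the harness passes no -y here and the dump reports "Verify: No", presumably because verification is the workload itself. Two quick checks against the figures below:

  echo $(( 4096 / 512 ))                    # 8 protected blocks per transfer
  echo $(( 111328 * 4096 / 1024 / 1024 ))   # 434 -> matches the Total MiB/s row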
00:11:34.140 22:19:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:34.140 22:19:32 -- accel/accel.sh@16 -- # local accel_opc 00:11:34.140 22:19:32 -- accel/accel.sh@17 -- # local accel_module 00:11:34.140 22:19:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:34.140 22:19:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:34.140 22:19:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.140 22:19:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.140 22:19:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.140 22:19:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.140 22:19:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.140 22:19:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.140 22:19:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.140 22:19:32 -- accel/accel.sh@42 -- # jq -r . 00:11:34.140 [2024-07-14 22:19:32.975904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:34.140 [2024-07-14 22:19:32.976380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109494 ] 00:11:34.140 [2024-07-14 22:19:33.146196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.408 [2024-07-14 22:19:33.347114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.367 22:19:35 -- accel/accel.sh@18 -- # out=' 00:11:36.367 SPDK Configuration: 00:11:36.367 Core mask: 0x1 00:11:36.367 00:11:36.367 Accel Perf Configuration: 00:11:36.367 Workload Type: dif_verify 00:11:36.367 Vector size: 4096 bytes 00:11:36.367 Transfer size: 4096 bytes 00:11:36.367 Block size: 512 bytes 00:11:36.367 Metadata size: 8 bytes 00:11:36.367 Vector count 1 00:11:36.367 Module: software 00:11:36.367 Queue depth: 32 00:11:36.367 Allocate depth: 32 00:11:36.367 # threads/core: 1 00:11:36.367 Run time: 1 seconds 00:11:36.367 Verify: No 00:11:36.367 00:11:36.367 Running for 1 seconds... 00:11:36.367 00:11:36.367 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.367 ------------------------------------------------------------------------------------ 00:11:36.367 0,0 111328/s 441 MiB/s 0 0 00:11:36.367 ==================================================================================== 00:11:36.367 Total 111328/s 434 MiB/s 0 0' 00:11:36.367 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.367 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.367 22:19:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:36.367 22:19:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.367 22:19:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:36.367 22:19:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.367 22:19:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.367 22:19:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.367 22:19:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.367 22:19:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.367 22:19:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.367 22:19:35 -- accel/accel.sh@42 -- # jq -r . 00:11:36.367 [2024-07-14 22:19:35.325695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:36.367 [2024-07-14 22:19:35.326090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109528 ] 00:11:36.625 [2024-07-14 22:19:35.493680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.883 [2024-07-14 22:19:35.690883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=0x1 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=dif_verify 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=software 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@23 -- # accel_module=software 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- 
accel/accel.sh@21 -- # val=32 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=32 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=1 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val=No 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:36.883 22:19:35 -- accel/accel.sh@21 -- # val= 00:11:36.883 22:19:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # IFS=: 00:11:36.883 22:19:35 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 22:19:37 -- accel/accel.sh@21 -- # val= 00:11:38.785 22:19:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # IFS=: 00:11:38.785 22:19:37 -- accel/accel.sh@20 -- # read -r var val 00:11:38.785 ************************************ 00:11:38.785 END TEST accel_dif_verify 00:11:38.785 ************************************ 00:11:38.785 22:19:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:38.785 22:19:37 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:38.785 22:19:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:38.785 00:11:38.785 real 0m4.688s 00:11:38.785 user 0m4.152s 00:11:38.785 sys 0m0.401s 00:11:38.785 22:19:37 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:38.785 22:19:37 -- common/autotest_common.sh@10 -- # set +x 00:11:38.785 22:19:37 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:38.785 22:19:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:38.785 22:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.785 22:19:37 -- common/autotest_common.sh@10 -- # set +x 00:11:38.785 ************************************ 00:11:38.785 START TEST accel_dif_generate 00:11:38.785 ************************************ 00:11:38.785 22:19:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:38.785 22:19:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:38.785 22:19:37 -- accel/accel.sh@17 -- # local accel_module 00:11:38.785 22:19:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:38.785 22:19:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:38.785 22:19:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.785 22:19:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.785 22:19:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.785 22:19:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.785 22:19:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.785 22:19:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.785 22:19:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.785 22:19:37 -- accel/accel.sh@42 -- # jq -r . 00:11:38.785 [2024-07-14 22:19:37.717496] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:38.785 [2024-07-14 22:19:37.717858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109592 ] 00:11:39.044 [2024-07-14 22:19:37.885593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.044 [2024-07-14 22:19:38.088230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.576 22:19:40 -- accel/accel.sh@18 -- # out=' 00:11:41.576 SPDK Configuration: 00:11:41.576 Core mask: 0x1 00:11:41.576 00:11:41.576 Accel Perf Configuration: 00:11:41.576 Workload Type: dif_generate 00:11:41.576 Vector size: 4096 bytes 00:11:41.576 Transfer size: 4096 bytes 00:11:41.576 Block size: 512 bytes 00:11:41.576 Metadata size: 8 bytes 00:11:41.576 Vector count 1 00:11:41.576 Module: software 00:11:41.576 Queue depth: 32 00:11:41.576 Allocate depth: 32 00:11:41.576 # threads/core: 1 00:11:41.576 Run time: 1 seconds 00:11:41.576 Verify: No 00:11:41.576 00:11:41.576 Running for 1 seconds... 
00:11:41.576 00:11:41.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:41.576 ------------------------------------------------------------------------------------ 00:11:41.576 0,0 139360/s 544 MiB/s 0 0 00:11:41.576 ==================================================================================== 00:11:41.576 Total 139360/s 544 MiB/s 0 0' 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.576 22:19:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:41.576 22:19:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:41.576 22:19:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.576 22:19:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.576 22:19:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.576 22:19:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.576 22:19:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.576 22:19:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.576 22:19:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.576 22:19:40 -- accel/accel.sh@42 -- # jq -r . 00:11:41.576 [2024-07-14 22:19:40.050033] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:41.576 [2024-07-14 22:19:40.050436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109626 ] 00:11:41.576 [2024-07-14 22:19:40.217362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.576 [2024-07-14 22:19:40.415419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.576 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.576 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.576 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.576 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.576 22:19:40 -- accel/accel.sh@21 -- # val=0x1 00:11:41.576 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.576 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=dif_generate 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val
00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=software 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@23 -- # accel_module=software 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=32 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=32 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=1 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val=No 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:41.577 22:19:40 -- accel/accel.sh@21 -- # val= 00:11:41.577 22:19:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # IFS=: 00:11:41.577 22:19:40 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- 
accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 22:19:42 -- accel/accel.sh@21 -- # val= 00:11:43.478 22:19:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # IFS=: 00:11:43.478 22:19:42 -- accel/accel.sh@20 -- # read -r var val 00:11:43.478 ************************************ 00:11:43.478 END TEST accel_dif_generate 00:11:43.478 ************************************ 00:11:43.478 22:19:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:43.478 22:19:42 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:43.478 22:19:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.479 00:11:43.479 real 0m4.669s 00:11:43.479 user 0m4.175s 00:11:43.479 sys 0m0.350s 00:11:43.479 22:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.479 22:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:43.479 22:19:42 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:43.479 22:19:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:43.479 22:19:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.479 22:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:43.479 ************************************ 00:11:43.479 START TEST accel_dif_generate_copy 00:11:43.479 ************************************ 00:11:43.479 22:19:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:43.479 22:19:42 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.479 22:19:42 -- accel/accel.sh@17 -- # local accel_module 00:11:43.479 22:19:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:43.479 22:19:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:43.479 22:19:42 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.479 22:19:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.479 22:19:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.479 22:19:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.479 22:19:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.479 22:19:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.479 22:19:42 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.479 22:19:42 -- accel/accel.sh@42 -- # jq -r . 00:11:43.479 [2024-07-14 22:19:42.437997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
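dif_generate_copy, now starting, differs from the dif_generate case above in that each operation also copies the 4 KiB payload to a separate destination buffer while generating the protection information, so somewhat lower throughput than plain dif_generate is to be expected. The workload can be exercised standalone with the same binary the harness drives (a sketch; it assumes the built tree at the path this job uses and omits the -c JSON config, which the harness only populates for non-default accel modules):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy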
00:11:43.479 [2024-07-14 22:19:42.438339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109681 ] 00:11:43.738 [2024-07-14 22:19:42.592486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.738 [2024-07-14 22:19:42.795889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.672 22:19:44 -- accel/accel.sh@18 -- # out=' 00:11:45.672 SPDK Configuration: 00:11:45.672 Core mask: 0x1 00:11:45.672 00:11:45.672 Accel Perf Configuration: 00:11:45.672 Workload Type: dif_generate_copy 00:11:45.672 Vector size: 4096 bytes 00:11:45.672 Transfer size: 4096 bytes 00:11:45.672 Vector count 1 00:11:45.672 Module: software 00:11:45.672 Queue depth: 32 00:11:45.672 Allocate depth: 32 00:11:45.672 # threads/core: 1 00:11:45.672 Run time: 1 seconds 00:11:45.672 Verify: No 00:11:45.672 00:11:45.672 Running for 1 seconds... 00:11:45.672 00:11:45.672 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:45.672 ------------------------------------------------------------------------------------ 00:11:45.672 0,0 106752/s 417 MiB/s 0 0 00:11:45.672 ==================================================================================== 00:11:45.672 Total 106752/s 417 MiB/s 0 0' 00:11:45.672 22:19:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:45.672 22:19:44 -- accel/accel.sh@20 -- # IFS=: 00:11:45.672 22:19:44 -- accel/accel.sh@20 -- # read -r var val 00:11:45.672 22:19:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:45.672 22:19:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.672 22:19:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.672 22:19:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.672 22:19:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.672 22:19:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.672 22:19:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.672 22:19:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.672 22:19:44 -- accel/accel.sh@42 -- # jq -r . 00:11:45.930 [2024-07-14 22:19:44.753740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:11:45.930 [2024-07-14 22:19:44.754116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109719 ] 00:11:45.930 [2024-07-14 22:19:44.922076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.188 [2024-07-14 22:19:45.117144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=0x1 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=software 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@23 -- # accel_module=software 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=32 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=32 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 
-- # val=1 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val=No 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:46.454 22:19:45 -- accel/accel.sh@21 -- # val= 00:11:46.454 22:19:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # IFS=: 00:11:46.454 22:19:45 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 22:19:47 -- accel/accel.sh@21 -- # val= 00:11:48.384 22:19:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # IFS=: 00:11:48.384 22:19:47 -- accel/accel.sh@20 -- # read -r var val 00:11:48.384 ************************************ 00:11:48.384 END TEST accel_dif_generate_copy 00:11:48.384 ************************************ 00:11:48.384 22:19:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:48.384 22:19:47 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:48.384 22:19:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:48.384 00:11:48.384 real 0m4.650s 00:11:48.384 user 0m4.143s 00:11:48.384 sys 0m0.362s 00:11:48.384 22:19:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.384 22:19:47 -- common/autotest_common.sh@10 -- # set +x 00:11:48.384 22:19:47 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:48.384 22:19:47 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.384 22:19:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:48.384 22:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.384 22:19:47 -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.384 ************************************ 00:11:48.384 START TEST accel_comp 00:11:48.384 ************************************ 00:11:48.384 22:19:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.384 22:19:47 -- accel/accel.sh@16 -- # local accel_opc 00:11:48.384 22:19:47 -- accel/accel.sh@17 -- # local accel_module 00:11:48.384 22:19:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.384 22:19:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.384 22:19:47 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.384 22:19:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.384 22:19:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.384 22:19:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.384 22:19:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.384 22:19:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.384 22:19:47 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.384 22:19:47 -- accel/accel.sh@42 -- # jq -r . 00:11:48.384 [2024-07-14 22:19:47.151905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:48.384 [2024-07-14 22:19:47.152314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109784 ] 00:11:48.384 [2024-07-14 22:19:47.321577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.642 [2024-07-14 22:19:47.520774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.547 22:19:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:50.547 00:11:50.547 SPDK Configuration: 00:11:50.547 Core mask: 0x1 00:11:50.547 00:11:50.547 Accel Perf Configuration: 00:11:50.547 Workload Type: compress 00:11:50.547 Transfer size: 4096 bytes 00:11:50.547 Vector count 1 00:11:50.547 Module: software 00:11:50.547 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.547 Queue depth: 32 00:11:50.547 Allocate depth: 32 00:11:50.547 # threads/core: 1 00:11:50.547 Run time: 1 seconds 00:11:50.547 Verify: No 00:11:50.547 00:11:50.547 Running for 1 seconds... 
00:11:50.547 00:11:50.547 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:50.547 ------------------------------------------------------------------------------------ 00:11:50.547 0,0 59168/s 231 MiB/s 0 0 00:11:50.547 ==================================================================================== 00:11:50.547 Total 59168/s 231 MiB/s 0 0' 00:11:50.547 22:19:49 -- accel/accel.sh@20 -- # IFS=: 00:11:50.547 22:19:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.547 22:19:49 -- accel/accel.sh@20 -- # read -r var val 00:11:50.547 22:19:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.547 22:19:49 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.547 22:19:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.547 22:19:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.547 22:19:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.547 22:19:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.547 22:19:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.547 22:19:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.547 22:19:49 -- accel/accel.sh@42 -- # jq -r . 00:11:50.547 [2024-07-14 22:19:49.490158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:50.547 [2024-07-14 22:19:49.490799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109827 ] 00:11:50.805 [2024-07-14 22:19:49.661276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.805 [2024-07-14 22:19:49.859572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=0x1 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=compress 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=:
00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=software 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@23 -- # accel_module=software 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=32 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=32 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=1 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val=No 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:51.064 22:19:50 -- accel/accel.sh@21 -- # val= 00:11:51.064 22:19:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # IFS=: 00:11:51.064 22:19:50 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 
00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 22:19:51 -- accel/accel.sh@21 -- # val= 00:11:52.967 22:19:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # IFS=: 00:11:52.967 22:19:51 -- accel/accel.sh@20 -- # read -r var val 00:11:52.967 ************************************ 00:11:52.967 END TEST accel_comp 00:11:52.967 ************************************ 00:11:52.967 22:19:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:52.967 22:19:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:52.967 22:19:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.967 00:11:52.967 real 0m4.673s 00:11:52.967 user 0m4.147s 00:11:52.967 sys 0m0.361s 00:11:52.967 22:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.967 22:19:51 -- common/autotest_common.sh@10 -- # set +x 00:11:52.967 22:19:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:52.967 22:19:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:52.967 22:19:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.967 22:19:51 -- common/autotest_common.sh@10 -- # set +x 00:11:52.967 ************************************ 00:11:52.967 START TEST accel_decomp 00:11:52.967 ************************************ 00:11:52.967 22:19:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:52.967 22:19:51 -- accel/accel.sh@16 -- # local accel_opc 00:11:52.967 22:19:51 -- accel/accel.sh@17 -- # local accel_module 00:11:52.967 22:19:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:52.967 22:19:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:52.967 22:19:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:52.967 22:19:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.967 22:19:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.967 22:19:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.967 22:19:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.967 22:19:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.967 22:19:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.967 22:19:51 -- accel/accel.sh@42 -- # jq -r . 00:11:52.967 [2024-07-14 22:19:51.879315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:52.967 [2024-07-14 22:19:51.879666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109874 ] 00:11:53.226 [2024-07-14 22:19:52.049823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.226 [2024-07-14 22:19:52.261564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.757 22:19:54 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:55.757 00:11:55.757 SPDK Configuration: 00:11:55.757 Core mask: 0x1 00:11:55.757 00:11:55.757 Accel Perf Configuration: 00:11:55.757 Workload Type: decompress 00:11:55.757 Transfer size: 4096 bytes 00:11:55.757 Vector count 1 00:11:55.757 Module: software 00:11:55.757 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.757 Queue depth: 32 00:11:55.757 Allocate depth: 32 00:11:55.757 # threads/core: 1 00:11:55.757 Run time: 1 seconds 00:11:55.757 Verify: Yes 00:11:55.757 00:11:55.757 Running for 1 seconds... 00:11:55.757 00:11:55.757 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.757 ------------------------------------------------------------------------------------ 00:11:55.757 0,0 69184/s 270 MiB/s 0 0 00:11:55.757 ==================================================================================== 00:11:55.757 Total 69184/s 270 MiB/s 0 0' 00:11:55.757 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:55.757 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:55.757 22:19:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:55.757 22:19:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:55.757 22:19:54 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.757 22:19:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.757 22:19:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.757 22:19:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.757 22:19:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.757 22:19:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.757 22:19:54 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.757 22:19:54 -- accel/accel.sh@42 -- # jq -r . 00:11:55.757 [2024-07-14 22:19:54.287049] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
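Unlike the dif_* workloads, the compress/decompress runs operate on a real payload: -l points accel_perf at an input file (test/accel/bib) and -y enables verification of the output, which is why these configuration blocks report a File Name and Verify: Yes; the 'Preparing input file...' notice suggests the tool first compresses the input itself so that there is something to decompress. Standalone sketch, same assumptions as above:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y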
00:11:55.757 [2024-07-14 22:19:54.287570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109908 ] 00:11:55.757 [2024-07-14 22:19:54.457340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.757 [2024-07-14 22:19:54.660952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=0x1 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=decompress 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=software 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@23 -- # accel_module=software 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=32 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- 
accel/accel.sh@21 -- # val=32 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=1 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val=Yes 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:56.015 22:19:54 -- accel/accel.sh@21 -- # val= 00:11:56.015 22:19:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # IFS=: 00:11:56.015 22:19:54 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 22:19:56 -- accel/accel.sh@21 -- # val= 00:11:57.916 22:19:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # IFS=: 00:11:57.916 22:19:56 -- accel/accel.sh@20 -- # read -r var val 00:11:57.916 ************************************ 00:11:57.916 END TEST accel_decomp 00:11:57.916 ************************************ 00:11:57.916 22:19:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.916 22:19:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.916 22:19:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.916 00:11:57.916 real 0m4.753s 00:11:57.916 user 0m4.210s 00:11:57.917 sys 0m0.390s 00:11:57.917 22:19:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.917 22:19:56 -- common/autotest_common.sh@10 -- # set +x 00:11:57.917 22:19:56 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
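The accel_decmop_full case queued here (the name is accel.sh's own spelling) repeats the decompress workload with one extra flag, -o 0; judging by the configuration block that follows, accel_perf then switches from the 4096-byte chunks used so far to one 111250-byte transfer per operation, i.e. a full-buffer decompress. Standalone sketch, same assumptions as above:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0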
00:11:57.917 22:19:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:57.917 22:19:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.917 22:19:56 -- common/autotest_common.sh@10 -- # set +x 00:11:57.917 ************************************ 00:11:57.917 START TEST accel_decmop_full 00:11:57.917 ************************************ 00:11:57.917 22:19:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:57.917 22:19:56 -- accel/accel.sh@16 -- # local accel_opc 00:11:57.917 22:19:56 -- accel/accel.sh@17 -- # local accel_module 00:11:57.917 22:19:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:57.917 22:19:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:57.917 22:19:56 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.917 22:19:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.917 22:19:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.917 22:19:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.917 22:19:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.917 22:19:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.917 22:19:56 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.917 22:19:56 -- accel/accel.sh@42 -- # jq -r . 00:11:57.917 [2024-07-14 22:19:56.693720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:57.917 [2024-07-14 22:19:56.694274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109962 ] 00:11:57.917 [2024-07-14 22:19:56.872508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.175 [2024-07-14 22:19:57.078603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.076 22:19:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:00.076 00:12:00.076 SPDK Configuration: 00:12:00.076 Core mask: 0x1 00:12:00.076 00:12:00.076 Accel Perf Configuration: 00:12:00.076 Workload Type: decompress 00:12:00.076 Transfer size: 111250 bytes 00:12:00.076 Vector count 1 00:12:00.076 Module: software 00:12:00.076 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.076 Queue depth: 32 00:12:00.076 Allocate depth: 32 00:12:00.076 # threads/core: 1 00:12:00.076 Run time: 1 seconds 00:12:00.076 Verify: Yes 00:12:00.076 00:12:00.076 Running for 1 seconds... 
00:12:00.076 00:12:00.076 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:00.076 ------------------------------------------------------------------------------------ 00:12:00.076 0,0 5376/s 570 MiB/s 0 0 00:12:00.076 ==================================================================================== 00:12:00.076 Total 5376/s 570 MiB/s 0 0' 00:12:00.076 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.076 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.076 22:19:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.076 22:19:59 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.076 22:19:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.076 22:19:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.076 22:19:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.076 22:19:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.076 22:19:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.076 22:19:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.076 22:19:59 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.076 22:19:59 -- accel/accel.sh@42 -- # jq -r . 00:12:00.076 [2024-07-14 22:19:59.053649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:00.076 [2024-07-14 22:19:59.054244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110015 ] 00:12:00.335 [2024-07-14 22:19:59.225333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.592 [2024-07-14 22:19:59.414770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val=0x1 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.592 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.592 22:19:59 -- accel/accel.sh@21 -- # val=decompress 00:12:00.592 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.592 22:19:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:00.593 22:19:59 --
accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=software 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@23 -- # accel_module=software 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=32 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=32 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=1 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val=Yes 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:00.593 22:19:59 -- accel/accel.sh@21 -- # val= 00:12:00.593 22:19:59 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # IFS=: 00:12:00.593 22:19:59 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- 
accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 22:20:01 -- accel/accel.sh@21 -- # val= 00:12:02.569 22:20:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # IFS=: 00:12:02.569 22:20:01 -- accel/accel.sh@20 -- # read -r var val 00:12:02.569 ************************************ 00:12:02.569 END TEST accel_decmop_full 00:12:02.569 ************************************ 00:12:02.569 22:20:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:02.569 22:20:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:02.569 22:20:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.569 00:12:02.569 real 0m4.706s 00:12:02.569 user 0m4.215s 00:12:02.569 sys 0m0.342s 00:12:02.569 22:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.569 22:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:02.569 22:20:01 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.569 22:20:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:02.569 22:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.569 22:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:02.569 ************************************ 00:12:02.569 START TEST accel_decomp_mcore 00:12:02.569 ************************************ 00:12:02.569 22:20:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.569 22:20:01 -- accel/accel.sh@16 -- # local accel_opc 00:12:02.569 22:20:01 -- accel/accel.sh@17 -- # local accel_module 00:12:02.569 22:20:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.569 22:20:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.569 22:20:01 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.569 22:20:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:02.569 22:20:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.569 22:20:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.569 22:20:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:02.569 22:20:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:02.569 22:20:01 -- accel/accel.sh@41 -- # local IFS=, 00:12:02.569 22:20:01 -- accel/accel.sh@42 -- # jq -r . 00:12:02.569 [2024-07-14 22:20:01.445218] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:02.569 [2024-07-14 22:20:01.445583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110066 ] 00:12:02.569 [2024-07-14 22:20:01.628277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.827 [2024-07-14 22:20:01.809911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.827 [2024-07-14 22:20:01.810038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.827 [2024-07-14 22:20:01.810205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.827 [2024-07-14 22:20:01.810207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.355 22:20:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:05.355 00:12:05.355 SPDK Configuration: 00:12:05.355 Core mask: 0xf 00:12:05.355 00:12:05.355 Accel Perf Configuration: 00:12:05.355 Workload Type: decompress 00:12:05.355 Transfer size: 4096 bytes 00:12:05.355 Vector count 1 00:12:05.355 Module: software 00:12:05.355 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.355 Queue depth: 32 00:12:05.355 Allocate depth: 32 00:12:05.355 # threads/core: 1 00:12:05.355 Run time: 1 seconds 00:12:05.355 Verify: Yes 00:12:05.355 00:12:05.355 Running for 1 seconds... 00:12:05.355 00:12:05.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:05.355 ------------------------------------------------------------------------------------ 00:12:05.355 0,0 47200/s 87 MiB/s 0 0 00:12:05.355 3,0 45504/s 83 MiB/s 0 0 00:12:05.355 2,0 47072/s 86 MiB/s 0 0 00:12:05.355 1,0 45536/s 83 MiB/s 0 0 00:12:05.355 ==================================================================================== 00:12:05.355 Total 185312/s 723 MiB/s 0 0' 00:12:05.355 22:20:03 -- accel/accel.sh@20 -- # IFS=: 00:12:05.356 22:20:03 -- accel/accel.sh@20 -- # read -r var val 00:12:05.356 22:20:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:05.356 22:20:03 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.356 22:20:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:05.356 22:20:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:05.356 22:20:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.356 22:20:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.356 22:20:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:05.356 22:20:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:05.356 22:20:03 -- accel/accel.sh@41 -- # local IFS=, 00:12:05.356 22:20:03 -- accel/accel.sh@42 -- # jq -r . 00:12:05.356 [2024-07-14 22:20:03.837754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
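A quick cross-check of the Core,Thread table above: each row is one reactor (the 0xf core mask starts reactors on cores 0-3), the Total transfer count is the sum of the four per-core counts, and the Total bandwidth matches that sum at the 4096-byte decompressed transfer size. The per-core MiB/s figures are visibly smaller than transfers/s times 4096 bytes would give; from the numbers they appear to be computed on the compressed input size instead -- an observation from this log, not from accel_perf's source.

# Sketch: sanity-checking the Total row from the per-core transfer counts
# (numbers copied from the table above).
per_core=(47200 45504 47072 45536)   # rows 0,0  3,0  2,0  1,0
total=0
for t in "${per_core[@]}"; do total=$((total + t)); done
echo "total transfers/s: $total"                        # prints 185312
echo "MiB/s at 4096 B:   $(( total * 4096 / 1048576 ))" # prints 723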
00:12:05.356 [2024-07-14 22:20:03.838107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110110 ] 00:12:05.356 [2024-07-14 22:20:04.020923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.356 [2024-07-14 22:20:04.225511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.356 [2024-07-14 22:20:04.225616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.356 [2024-07-14 22:20:04.225747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.356 [2024-07-14 22:20:04.225750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=0xf 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=decompress 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=software 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@23 -- # accel_module=software 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 
00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=32 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=32 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=1 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val=Yes 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:05.614 22:20:04 -- accel/accel.sh@21 -- # val= 00:12:05.614 22:20:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # IFS=: 00:12:05.614 22:20:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- 
accel/accel.sh@20 -- # read -r var val 00:12:07.513 22:20:06 -- accel/accel.sh@21 -- # val= 00:12:07.513 22:20:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # IFS=: 00:12:07.513 22:20:06 -- accel/accel.sh@20 -- # read -r var val 00:12:07.513 ************************************ 00:12:07.513 END TEST accel_decomp_mcore 00:12:07.513 ************************************ 00:12:07.513 22:20:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:07.513 22:20:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:07.513 22:20:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.513 00:12:07.513 real 0m4.809s 00:12:07.513 user 0m14.106s 00:12:07.513 sys 0m0.447s 00:12:07.513 22:20:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.513 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:12:07.513 22:20:06 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:07.513 22:20:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:07.513 22:20:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.513 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:12:07.513 ************************************ 00:12:07.513 START TEST accel_decomp_full_mcore 00:12:07.513 ************************************ 00:12:07.513 22:20:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:07.513 22:20:06 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.513 22:20:06 -- accel/accel.sh@17 -- # local accel_module 00:12:07.513 22:20:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:07.513 22:20:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:07.513 22:20:06 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.513 22:20:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.514 22:20:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.514 22:20:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.514 22:20:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.514 22:20:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.514 22:20:06 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.514 22:20:06 -- accel/accel.sh@42 -- # jq -r . 00:12:07.514 [2024-07-14 22:20:06.305025] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:07.514 [2024-07-14 22:20:06.305271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110160 ] 00:12:07.514 [2024-07-14 22:20:06.494770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.771 [2024-07-14 22:20:06.687028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.771 [2024-07-14 22:20:06.687141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.771 [2024-07-14 22:20:06.688256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.771 [2024-07-14 22:20:06.688301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.668 22:20:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:09.668 00:12:09.668 SPDK Configuration: 00:12:09.668 Core mask: 0xf 00:12:09.668 00:12:09.668 Accel Perf Configuration: 00:12:09.668 Workload Type: decompress 00:12:09.668 Transfer size: 111250 bytes 00:12:09.668 Vector count 1 00:12:09.668 Module: software 00:12:09.668 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:09.668 Queue depth: 32 00:12:09.668 Allocate depth: 32 00:12:09.668 # threads/core: 1 00:12:09.668 Run time: 1 seconds 00:12:09.668 Verify: Yes 00:12:09.668 00:12:09.668 Running for 1 seconds... 00:12:09.668 00:12:09.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:09.668 ------------------------------------------------------------------------------------ 00:12:09.668 0,0 4960/s 204 MiB/s 0 0 00:12:09.668 3,0 4640/s 191 MiB/s 0 0 00:12:09.668 2,0 4960/s 204 MiB/s 0 0 00:12:09.668 1,0 4768/s 196 MiB/s 0 0 00:12:09.668 ==================================================================================== 00:12:09.668 Total 19328/s 2050 MiB/s 0 0' 00:12:09.668 22:20:08 -- accel/accel.sh@20 -- # IFS=: 00:12:09.669 22:20:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:09.669 22:20:08 -- accel/accel.sh@20 -- # read -r var val 00:12:09.669 22:20:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:09.669 22:20:08 -- accel/accel.sh@12 -- # build_accel_config 00:12:09.669 22:20:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:09.669 22:20:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.669 22:20:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.669 22:20:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:09.669 22:20:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:09.669 22:20:08 -- accel/accel.sh@41 -- # local IFS=, 00:12:09.669 22:20:08 -- accel/accel.sh@42 -- # jq -r . 00:12:09.927 [2024-07-14 22:20:08.750912] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
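The core mask is the only thing that distinguishes the mcore variants of these tests from their single-core counterparts: -m 0xf asks the app to start one reactor per set bit, which is why the startup notices above report "Total cores available: 4" and reactors on cores 0 through 3. A small sketch of decoding such a mask (a hypothetical helper for reading these logs, not SPDK code):

# Sketch: list the cores selected by an SPDK-style hex core mask.
mask=0xf
for bit in $(seq 0 31); do
  if (( (mask >> bit) & 1 )); then
    echo "reactor expected on core $bit"   # prints cores 0,1,2,3 for 0xf
  fi
done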
00:12:09.927 [2024-07-14 22:20:08.751120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110224 ] 00:12:09.927 [2024-07-14 22:20:08.934349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.186 [2024-07-14 22:20:09.132920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.186 [2024-07-14 22:20:09.133009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.186 [2024-07-14 22:20:09.133698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.186 [2024-07-14 22:20:09.133637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=0xf 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=decompress 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=software 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@23 -- # accel_module=software 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 
00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=32 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=32 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=1 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val=Yes 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.444 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.444 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:10.444 22:20:09 -- accel/accel.sh@21 -- # val= 00:12:10.445 22:20:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.445 22:20:09 -- accel/accel.sh@20 -- # IFS=: 00:12:10.445 22:20:09 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- 
accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@21 -- # val= 00:12:12.347 22:20:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # IFS=: 00:12:12.347 22:20:11 -- accel/accel.sh@20 -- # read -r var val 00:12:12.347 22:20:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:12.347 22:20:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:12.347 22:20:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.347 00:12:12.347 real 0m4.884s 00:12:12.347 user 0m14.278s 00:12:12.347 sys 0m0.494s 00:12:12.347 22:20:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.347 22:20:11 -- common/autotest_common.sh@10 -- # set +x 00:12:12.347 ************************************ 00:12:12.347 END TEST accel_decomp_full_mcore 00:12:12.347 ************************************ 00:12:12.347 22:20:11 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:12.347 22:20:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:12.347 22:20:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.347 22:20:11 -- common/autotest_common.sh@10 -- # set +x 00:12:12.347 ************************************ 00:12:12.347 START TEST accel_decomp_mthread 00:12:12.347 ************************************ 00:12:12.347 22:20:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:12.347 22:20:11 -- accel/accel.sh@16 -- # local accel_opc 00:12:12.347 22:20:11 -- accel/accel.sh@17 -- # local accel_module 00:12:12.347 22:20:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:12.347 22:20:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:12.347 22:20:11 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.347 22:20:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.347 22:20:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.347 22:20:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.347 22:20:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.347 22:20:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.347 22:20:11 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.348 22:20:11 -- accel/accel.sh@42 -- # jq -r . 00:12:12.348 [2024-07-14 22:20:11.243296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:12.348 [2024-07-14 22:20:11.243495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110274 ] 00:12:12.606 [2024-07-14 22:20:11.413955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.606 [2024-07-14 22:20:11.612717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.135 22:20:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:15.135 00:12:15.135 SPDK Configuration: 00:12:15.135 Core mask: 0x1 00:12:15.135 00:12:15.135 Accel Perf Configuration: 00:12:15.135 Workload Type: decompress 00:12:15.135 Transfer size: 4096 bytes 00:12:15.135 Vector count 1 00:12:15.135 Module: software 00:12:15.135 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:15.135 Queue depth: 32 00:12:15.135 Allocate depth: 32 00:12:15.135 # threads/core: 2 00:12:15.135 Run time: 1 seconds 00:12:15.135 Verify: Yes 00:12:15.135 00:12:15.135 Running for 1 seconds... 00:12:15.135 00:12:15.135 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:15.135 ------------------------------------------------------------------------------------ 00:12:15.135 0,1 33696/s 62 MiB/s 0 0 00:12:15.135 0,0 33536/s 61 MiB/s 0 0 00:12:15.135 ==================================================================================== 00:12:15.135 Total 67232/s 262 MiB/s 0 0' 00:12:15.135 22:20:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:15.135 22:20:13 -- accel/accel.sh@20 -- # IFS=: 00:12:15.135 22:20:13 -- accel/accel.sh@20 -- # read -r var val 00:12:15.135 22:20:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:15.135 22:20:13 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.135 22:20:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:15.135 22:20:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.135 22:20:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.135 22:20:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:15.135 22:20:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:15.135 22:20:13 -- accel/accel.sh@41 -- # local IFS=, 00:12:15.135 22:20:13 -- accel/accel.sh@42 -- # jq -r . 00:12:15.135 [2024-07-14 22:20:13.652656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:15.135 [2024-07-14 22:20:13.652834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110313 ] 00:12:15.135 [2024-07-14 22:20:13.818588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.135 [2024-07-14 22:20:14.005813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=0x1 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=decompress 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=software 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@23 -- # accel_module=software 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=32 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- 
accel/accel.sh@21 -- # val=32 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=2 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val=Yes 00:12:15.394 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.394 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.394 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.395 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.395 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.395 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:15.395 22:20:14 -- accel/accel.sh@21 -- # val= 00:12:15.395 22:20:14 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.395 22:20:14 -- accel/accel.sh@20 -- # IFS=: 00:12:15.395 22:20:14 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@21 -- # val= 00:12:17.298 22:20:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # IFS=: 00:12:17.298 22:20:15 -- accel/accel.sh@20 -- # read -r var val 00:12:17.298 22:20:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:17.298 22:20:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:17.298 22:20:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.298 00:12:17.298 real 0m4.763s 00:12:17.298 user 0m4.259s 00:12:17.298 sys 0m0.372s 00:12:17.298 22:20:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.298 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:12:17.298 ************************************ 00:12:17.298 END 
TEST accel_decomp_mthread 00:12:17.298 ************************************ 00:12:17.298 22:20:15 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:17.298 22:20:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:17.298 22:20:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.298 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:12:17.298 ************************************ 00:12:17.298 START TEST accel_deomp_full_mthread 00:12:17.298 ************************************ 00:12:17.298 22:20:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:17.298 22:20:16 -- accel/accel.sh@16 -- # local accel_opc 00:12:17.298 22:20:16 -- accel/accel.sh@17 -- # local accel_module 00:12:17.298 22:20:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:17.298 22:20:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:17.298 22:20:16 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.298 22:20:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:17.298 22:20:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.298 22:20:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.298 22:20:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:17.298 22:20:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:17.298 22:20:16 -- accel/accel.sh@41 -- # local IFS=, 00:12:17.298 22:20:16 -- accel/accel.sh@42 -- # jq -r . 00:12:17.298 [2024-07-14 22:20:16.055805] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:17.298 [2024-07-14 22:20:16.056617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110367 ] 00:12:17.298 [2024-07-14 22:20:16.224804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.557 [2024-07-14 22:20:16.396911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.553 22:20:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:19.553 00:12:19.553 SPDK Configuration: 00:12:19.553 Core mask: 0x1 00:12:19.553 00:12:19.553 Accel Perf Configuration: 00:12:19.553 Workload Type: decompress 00:12:19.553 Transfer size: 111250 bytes 00:12:19.553 Vector count 1 00:12:19.553 Module: software 00:12:19.553 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:19.553 Queue depth: 32 00:12:19.553 Allocate depth: 32 00:12:19.553 # threads/core: 2 00:12:19.553 Run time: 1 seconds 00:12:19.553 Verify: Yes 00:12:19.553 00:12:19.553 Running for 1 seconds... 
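Relative to the earlier full-buffer run, the only new flag in the command echoed above is -T 2: two worker threads on the single core in mask 0x1, which is why the configuration block reports "# threads/core: 2" and the results table below has one row per Core,Thread pair (0,0 and 0,1). A minimal sketch, with $SPDK set as in the earlier sketch:

# Sketch: the full-buffer decompress case with two threads on one core.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
  -l "$SPDK/test/accel/bib" -y -o 0 -T 2
# Expect per-thread rows 0,0 and 0,1 plus a Total row, as in the table below.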
00:12:19.553 00:12:19.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:19.553 ------------------------------------------------------------------------------------ 00:12:19.554 0,1 2656/s 109 MiB/s 0 0 00:12:19.554 0,0 2592/s 107 MiB/s 0 0 00:12:19.554 ==================================================================================== 00:12:19.554 Total 5248/s 556 MiB/s 0 0' 00:12:19.554 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:19.554 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:19.554 22:20:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:19.554 22:20:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:19.554 22:20:18 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.554 22:20:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:19.554 22:20:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.554 22:20:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.554 22:20:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:19.554 22:20:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:19.554 22:20:18 -- accel/accel.sh@41 -- # local IFS=, 00:12:19.554 22:20:18 -- accel/accel.sh@42 -- # jq -r . 00:12:19.554 [2024-07-14 22:20:18.383341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:19.554 [2024-07-14 22:20:18.383545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110419 ] 00:12:19.554 [2024-07-14 22:20:18.536302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.812 [2024-07-14 22:20:18.712421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=0x1 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=decompress 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=software 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@23 -- # accel_module=software 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=32 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=32 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=2 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val=Yes 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:20.071 22:20:18 -- accel/accel.sh@21 -- # val= 00:12:20.071 22:20:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # IFS=: 00:12:20.071 22:20:18 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # 
read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@21 -- # val= 00:12:21.999 22:20:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # IFS=: 00:12:21.999 22:20:20 -- accel/accel.sh@20 -- # read -r var val 00:12:21.999 22:20:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:21.999 22:20:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:21.999 22:20:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.999 00:12:21.999 real 0m4.667s 00:12:21.999 user 0m4.188s 00:12:21.999 sys 0m0.347s 00:12:21.999 22:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.999 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:12:21.999 ************************************ 00:12:21.999 END TEST accel_deomp_full_mthread 00:12:21.999 ************************************ 00:12:21.999 22:20:20 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:21.999 22:20:20 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:21.999 22:20:20 -- accel/accel.sh@129 -- # build_accel_config 00:12:21.999 22:20:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:21.999 22:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.999 22:20:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:21.999 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:12:21.999 22:20:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.999 22:20:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.999 22:20:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:21.999 22:20:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:21.999 22:20:20 -- accel/accel.sh@41 -- # local IFS=, 00:12:21.999 22:20:20 -- accel/accel.sh@42 -- # jq -r . 00:12:21.999 ************************************ 00:12:21.999 START TEST accel_dif_functional_tests 00:12:21.999 ************************************ 00:12:21.999 22:20:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:21.999 [2024-07-14 22:20:20.810546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
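The dif binary that just started is a standalone CUnit suite for the accel framework's T10 DIF support. Each protected block carries an 8-byte DIF field, and the negative verify tests below corrupt one field at a time and expect the compare errors that follow. The layout sketched here is the standard T10 one, stated from general knowledge rather than from this log:

# Sketch: the 8-byte T10 DIF field appended to each protected block.
#   bytes 0-1  guard tag       (CRC over the block data)
#   bytes 2-3  application tag
#   bytes 4-7  reference tag   (typically derived from the LBA)
# Mapping to the failures injected in the test output below:
#   "Failed to compare Guard: ... Expected=5a5a, Actual=7867"      -> guard tag
#   "Failed to compare App Tag: ... Expected=14, Actual=5a5a"      -> app tag
#   "Failed to compare Ref Tag: ... Expected=a, Actual=5a5a5a5a"   -> ref tag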
00:12:21.999 [2024-07-14 22:20:20.810753] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110466 ] 00:12:21.999 [2024-07-14 22:20:20.987139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.258 [2024-07-14 22:20:21.169503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.258 [2024-07-14 22:20:21.169656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.258 [2024-07-14 22:20:21.169655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.516 00:12:22.516 00:12:22.516 CUnit - A unit testing framework for C - Version 2.1-3 00:12:22.516 http://cunit.sourceforge.net/ 00:12:22.517 00:12:22.517 00:12:22.517 Suite: accel_dif 00:12:22.517 Test: verify: DIF generated, GUARD check ...passed 00:12:22.517 Test: verify: DIF generated, APPTAG check ...passed 00:12:22.517 Test: verify: DIF generated, REFTAG check ...passed 00:12:22.517 Test: verify: DIF not generated, GUARD check ...[2024-07-14 22:20:21.458862] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:22.517 passed 00:12:22.517 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 22:20:21.459308] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:22.517 [2024-07-14 22:20:21.459562] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:22.517 [2024-07-14 22:20:21.459825] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:22.517 passed 00:12:22.517 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 22:20:21.460014] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:22.517 [2024-07-14 22:20:21.460300] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:22.517 passed 00:12:22.517 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:22.517 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 22:20:21.460621] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:22.517 passed 00:12:22.517 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:22.517 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:22.517 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:22.517 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 22:20:21.461092] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:22.517 passed 00:12:22.517 Test: generate copy: DIF generated, GUARD check ...passed 00:12:22.517 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:22.517 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:22.517 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:22.517 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:22.517 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:22.517 Test: generate copy: iovecs-len validate ...[2024-07-14 22:20:21.461853] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:22.517 passed 00:12:22.517 Test: generate copy: buffer alignment validate ...passed 00:12:22.517 00:12:22.517 Run Summary: Type Total Ran Passed Failed Inactive 00:12:22.517 suites 1 1 n/a 0 0 00:12:22.517 tests 20 20 20 0 0 00:12:22.517 asserts 204 204 204 0 n/a 00:12:22.517 00:12:22.517 Elapsed time = 0.011 seconds 00:12:23.452 00:12:23.452 real 0m1.709s 00:12:23.452 user 0m3.231s 00:12:23.452 sys 0m0.255s 00:12:23.452 22:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.452 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:12:23.452 ************************************ 00:12:23.452 END TEST accel_dif_functional_tests 00:12:23.452 ************************************ 00:12:23.452 00:12:23.452 real 1m42.847s 00:12:23.452 user 1m52.903s 00:12:23.452 sys 0m9.128s 00:12:23.452 ************************************ 00:12:23.452 END TEST accel 00:12:23.452 ************************************ 00:12:23.452 22:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.452 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:12:23.709 22:20:22 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:23.709 22:20:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:23.709 22:20:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.709 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:12:23.709 ************************************ 00:12:23.709 START TEST accel_rpc 00:12:23.709 ************************************ 00:12:23.709 22:20:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:23.709 * Looking for test storage... 00:12:23.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:23.709 22:20:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:23.709 22:20:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=110556 00:12:23.709 22:20:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 110556 00:12:23.709 22:20:22 -- common/autotest_common.sh@819 -- # '[' -z 110556 ']' 00:12:23.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.709 22:20:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.709 22:20:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:23.709 22:20:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.709 22:20:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:23.709 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:12:23.709 22:20:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:23.709 [2024-07-14 22:20:22.667031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
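The spdk_tgt launched above uses --wait-for-rpc, so it stops once the RPC server is up and defers subsystem initialization until an RPC completes it; waitforlisten is the autotest_common.sh helper that blocks until the RPC socket answers. The startup pattern, sketched outside the harness (waitforlisten must be sourced from test/common/autotest_common.sh):

    # Start the target paused; only the RPC server comes up at this point.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    # waitforlisten polls /var/tmp/spdk.sock until the target accepts RPCs.
    waitforlisten "$spdk_tgt_pid"
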
00:12:23.710 [2024-07-14 22:20:22.667540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110556 ] 00:12:23.968 [2024-07-14 22:20:22.834142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.968 [2024-07-14 22:20:23.002550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:23.968 [2024-07-14 22:20:23.002795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.534 22:20:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:24.534 22:20:23 -- common/autotest_common.sh@852 -- # return 0 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:24.534 22:20:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:24.534 22:20:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.534 22:20:23 -- common/autotest_common.sh@10 -- # set +x 00:12:24.534 ************************************ 00:12:24.534 START TEST accel_assign_opcode 00:12:24.534 ************************************ 00:12:24.534 22:20:23 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:24.534 22:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.534 22:20:23 -- common/autotest_common.sh@10 -- # set +x 00:12:24.534 [2024-07-14 22:20:23.551606] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:24.534 22:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:24.534 22:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.534 22:20:23 -- common/autotest_common.sh@10 -- # set +x 00:12:24.534 [2024-07-14 22:20:23.559561] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:24.534 22:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.534 22:20:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:24.534 22:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.534 22:20:23 -- common/autotest_common.sh@10 -- # set +x 00:12:25.468 22:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.468 22:20:24 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:25.468 22:20:24 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:25.468 22:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.468 22:20:24 -- common/autotest_common.sh@10 -- # set +x 00:12:25.468 22:20:24 -- accel/accel_rpc.sh@42 -- # grep software 00:12:25.468 22:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.468 software 00:12:25.468 00:12:25.469 real 0m0.877s 00:12:25.469 user 0m0.069s 00:12:25.469 sys 0m0.002s 00:12:25.469 22:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.469 22:20:24 -- common/autotest_common.sh@10 -- # set +x 00:12:25.469 ************************************ 
00:12:25.469 END TEST accel_assign_opcode 00:12:25.469 ************************************ 00:12:25.469 22:20:24 -- accel/accel_rpc.sh@55 -- # killprocess 110556 00:12:25.469 22:20:24 -- common/autotest_common.sh@926 -- # '[' -z 110556 ']' 00:12:25.469 22:20:24 -- common/autotest_common.sh@930 -- # kill -0 110556 00:12:25.469 22:20:24 -- common/autotest_common.sh@931 -- # uname 00:12:25.469 22:20:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:25.469 22:20:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110556 00:12:25.469 22:20:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:25.469 killing process with pid 110556 00:12:25.469 22:20:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:25.469 22:20:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110556' 00:12:25.469 22:20:24 -- common/autotest_common.sh@945 -- # kill 110556 00:12:25.469 22:20:24 -- common/autotest_common.sh@950 -- # wait 110556 00:12:27.998 00:12:27.998 real 0m4.160s 00:12:27.998 user 0m4.139s 00:12:27.998 sys 0m0.474s 00:12:27.998 ************************************ 00:12:27.998 END TEST accel_rpc 00:12:27.998 22:20:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.998 22:20:26 -- common/autotest_common.sh@10 -- # set +x 00:12:27.998 ************************************ 00:12:27.998 22:20:26 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:27.998 22:20:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:27.998 22:20:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.998 22:20:26 -- common/autotest_common.sh@10 -- # set +x 00:12:27.998 ************************************ 00:12:27.998 START TEST app_cmdline 00:12:27.998 ************************************ 00:12:27.998 22:20:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:27.998 * Looking for test storage... 00:12:27.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:27.998 22:20:26 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:27.998 22:20:26 -- app/cmdline.sh@17 -- # spdk_tgt_pid=110686 00:12:27.998 22:20:26 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:27.998 22:20:26 -- app/cmdline.sh@18 -- # waitforlisten 110686 00:12:27.998 22:20:26 -- common/autotest_common.sh@819 -- # '[' -z 110686 ']' 00:12:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.998 22:20:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.998 22:20:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:27.998 22:20:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.998 22:20:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:27.998 22:20:26 -- common/autotest_common.sh@10 -- # set +x 00:12:27.998 [2024-07-14 22:20:26.900398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
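Stripped of the xtrace noise, the accel_assign_opcode suite that finished above is three RPCs against that paused target: assign the copy opcode (first to a bogus module to exercise the fallback, then to software), complete framework init so the assignment takes effect, and read the mapping back. A hedged equivalent with rpc.py, on the assumption that the harness's rpc_cmd wrapper issues the same JSON-RPC calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software     # must happen before init completes
    $rpc framework_start_init                     # locks the opcode/module table in
    $rpc accel_get_opc_assignments | jq -r .copy  # expected output: software
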
00:12:27.999 [2024-07-14 22:20:26.901280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110686 ] 00:12:28.257 [2024-07-14 22:20:27.074655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.257 [2024-07-14 22:20:27.281649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.257 [2024-07-14 22:20:27.281917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.634 22:20:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:29.634 22:20:28 -- common/autotest_common.sh@852 -- # return 0 00:12:29.634 22:20:28 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:29.892 { 00:12:29.892 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:12:29.892 "fields": { 00:12:29.892 "major": 24, 00:12:29.892 "minor": 1, 00:12:29.892 "patch": 1, 00:12:29.892 "suffix": "-pre", 00:12:29.892 "commit": "4b94202c6" 00:12:29.892 } 00:12:29.892 } 00:12:29.892 22:20:28 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:29.892 22:20:28 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:29.892 22:20:28 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:29.892 22:20:28 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:29.892 22:20:28 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:29.892 22:20:28 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:29.892 22:20:28 -- app/cmdline.sh@26 -- # sort 00:12:29.892 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.892 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:12:29.892 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.892 22:20:28 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:29.892 22:20:28 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:29.892 22:20:28 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:29.892 22:20:28 -- common/autotest_common.sh@640 -- # local es=0 00:12:29.892 22:20:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:29.892 22:20:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.893 22:20:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.893 22:20:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.893 22:20:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.893 22:20:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.893 22:20:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.893 22:20:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.893 22:20:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:29.893 22:20:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:30.151 request: 00:12:30.151 { 00:12:30.151 "method": "env_dpdk_get_mem_stats", 00:12:30.151 "req_id": 1 00:12:30.151 } 00:12:30.151 Got 
JSON-RPC error response 00:12:30.151 response: 00:12:30.151 { 00:12:30.151 "code": -32601, 00:12:30.151 "message": "Method not found" 00:12:30.151 } 00:12:30.151 22:20:29 -- common/autotest_common.sh@643 -- # es=1 00:12:30.151 22:20:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:30.151 22:20:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:30.151 22:20:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:30.151 22:20:29 -- app/cmdline.sh@1 -- # killprocess 110686 00:12:30.151 22:20:29 -- common/autotest_common.sh@926 -- # '[' -z 110686 ']' 00:12:30.151 22:20:29 -- common/autotest_common.sh@930 -- # kill -0 110686 00:12:30.151 22:20:29 -- common/autotest_common.sh@931 -- # uname 00:12:30.151 22:20:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:30.151 22:20:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110686 00:12:30.151 22:20:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:30.151 killing process with pid 110686 00:12:30.151 22:20:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:30.151 22:20:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110686' 00:12:30.151 22:20:29 -- common/autotest_common.sh@945 -- # kill 110686 00:12:30.151 22:20:29 -- common/autotest_common.sh@950 -- # wait 110686 00:12:32.683 00:12:32.683 real 0m4.603s 00:12:32.683 user 0m5.172s 00:12:32.683 sys 0m0.640s 00:12:32.683 22:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.683 ************************************ 00:12:32.683 END TEST app_cmdline 00:12:32.683 ************************************ 00:12:32.683 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.683 22:20:31 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:32.683 22:20:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:32.683 22:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.683 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.683 ************************************ 00:12:32.683 START TEST version 00:12:32.683 ************************************ 00:12:32.683 22:20:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:32.683 * Looking for test storage... 
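The env_dpdk_get_mem_stats rejection above is deliberate: the app_cmdline target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist fails with JSON-RPC error -32601 ("Method not found") even though the handler exists in the target. The allowlist behaviour, sketched in isolation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version          # allowed: returns the version object
    $rpc env_dpdk_get_mem_stats    # rejected: code -32601, "Method not found"
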
00:12:32.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:32.683 22:20:31 -- app/version.sh@17 -- # get_header_version major 00:12:32.683 22:20:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:32.683 22:20:31 -- app/version.sh@14 -- # cut -f2 00:12:32.683 22:20:31 -- app/version.sh@14 -- # tr -d '"' 00:12:32.683 22:20:31 -- app/version.sh@17 -- # major=24 00:12:32.683 22:20:31 -- app/version.sh@18 -- # get_header_version minor 00:12:32.683 22:20:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:32.683 22:20:31 -- app/version.sh@14 -- # cut -f2 00:12:32.683 22:20:31 -- app/version.sh@14 -- # tr -d '"' 00:12:32.683 22:20:31 -- app/version.sh@18 -- # minor=1 00:12:32.683 22:20:31 -- app/version.sh@19 -- # get_header_version patch 00:12:32.683 22:20:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:32.683 22:20:31 -- app/version.sh@14 -- # cut -f2 00:12:32.683 22:20:31 -- app/version.sh@14 -- # tr -d '"' 00:12:32.683 22:20:31 -- app/version.sh@19 -- # patch=1 00:12:32.683 22:20:31 -- app/version.sh@20 -- # get_header_version suffix 00:12:32.683 22:20:31 -- app/version.sh@14 -- # cut -f2 00:12:32.683 22:20:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:32.683 22:20:31 -- app/version.sh@14 -- # tr -d '"' 00:12:32.683 22:20:31 -- app/version.sh@20 -- # suffix=-pre 00:12:32.683 22:20:31 -- app/version.sh@22 -- # version=24.1 00:12:32.683 22:20:31 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:32.683 22:20:31 -- app/version.sh@25 -- # version=24.1.1 00:12:32.683 22:20:31 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:32.683 22:20:31 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:32.683 22:20:31 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:32.683 22:20:31 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:32.683 22:20:31 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:32.684 00:12:32.684 real 0m0.138s 00:12:32.684 user 0m0.113s 00:12:32.684 sys 0m0.054s 00:12:32.684 22:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.684 ************************************ 00:12:32.684 END TEST version 00:12:32.684 ************************************ 00:12:32.684 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.684 22:20:31 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:32.684 22:20:31 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:32.684 22:20:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:32.684 22:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.684 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.684 ************************************ 00:12:32.684 START TEST blockdev_general 00:12:32.684 ************************************ 00:12:32.684 22:20:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:32.684 * Looking for test storage... 
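version.sh above never starts the target: it scrapes the version macros straight out of include/spdk/version.h with grep/cut/tr, reassembles 24.1.1rc0, and cross-checks that against the Python package's spdk.__version__. One macro's extraction, exactly as the log runs it:

    # cut's default delimiter is the tab between the macro name and its value.
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    # prints: 24
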
00:12:32.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:32.684 22:20:31 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:32.684 22:20:31 -- bdev/nbd_common.sh@6 -- # set -e 00:12:32.684 22:20:31 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:32.684 22:20:31 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:32.684 22:20:31 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:32.684 22:20:31 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:32.684 22:20:31 -- bdev/blockdev.sh@18 -- # : 00:12:32.684 22:20:31 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:32.684 22:20:31 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:32.684 22:20:31 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:32.684 22:20:31 -- bdev/blockdev.sh@672 -- # uname -s 00:12:32.684 22:20:31 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:32.684 22:20:31 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:32.684 22:20:31 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:32.684 22:20:31 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:32.684 22:20:31 -- bdev/blockdev.sh@682 -- # dek= 00:12:32.684 22:20:31 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:32.684 22:20:31 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:32.684 22:20:31 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:32.684 22:20:31 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:32.684 22:20:31 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:32.684 22:20:31 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:32.684 22:20:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=110895 00:12:32.684 22:20:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:32.684 22:20:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:32.684 22:20:31 -- bdev/blockdev.sh@47 -- # waitforlisten 110895 00:12:32.684 22:20:31 -- common/autotest_common.sh@819 -- # '[' -z 110895 ']' 00:12:32.684 22:20:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.684 22:20:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:32.684 22:20:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.684 22:20:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:32.684 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.684 [2024-07-14 22:20:31.729065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:32.684 [2024-07-14 22:20:31.729495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110895 ] 00:12:32.942 [2024-07-14 22:20:31.898384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.201 [2024-07-14 22:20:32.106603] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:33.201 [2024-07-14 22:20:32.107177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.767 22:20:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:33.767 22:20:32 -- common/autotest_common.sh@852 -- # return 0 00:12:33.767 22:20:32 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:33.767 22:20:32 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:33.767 22:20:32 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:33.767 22:20:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.767 22:20:32 -- common/autotest_common.sh@10 -- # set +x 00:12:34.333 [2024-07-14 22:20:33.390985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:34.333 [2024-07-14 22:20:33.391341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:34.589 00:12:34.589 [2024-07-14 22:20:33.398978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:34.589 [2024-07-14 22:20:33.399166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:34.589 00:12:34.589 Malloc0 00:12:34.589 Malloc1 00:12:34.589 Malloc2 00:12:34.589 Malloc3 00:12:34.589 Malloc4 00:12:34.589 Malloc5 00:12:34.859 Malloc6 00:12:34.860 Malloc7 00:12:34.860 Malloc8 00:12:34.860 Malloc9 00:12:34.860 [2024-07-14 22:20:33.766337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:34.860 [2024-07-14 22:20:33.766573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.860 [2024-07-14 22:20:33.766640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:34.860 [2024-07-14 22:20:33.766763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.860 [2024-07-14 22:20:33.768993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.860 [2024-07-14 22:20:33.769159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:34.860 TestPT 00:12:34.860 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.860 22:20:33 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:34.860 5000+0 records in 00:12:34.860 5000+0 records out 00:12:34.860 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0280737 s, 365 MB/s 00:12:34.860 22:20:33 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:34.860 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.860 22:20:33 -- common/autotest_common.sh@10 -- # set +x 00:12:34.860 AIO0 00:12:34.860 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.860 22:20:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:34.860 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.860 22:20:33 -- common/autotest_common.sh@10 -- # set +x 
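The AIO0 device above is nothing more than a 10 MB flat file: dd writes 5000 zeroed 2048-byte blocks (the 5000+0 records and ~365 MB/s lines), and bdev_aio_create then wraps that file as a bdev with an explicit 2048-byte block size. The same two steps outside the harness, using rpc.py where the log's rpc_cmd wrapper issues the identical RPC:

    aiofile=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # 10240000-byte backing file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048
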
00:12:34.860 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.860 22:20:33 -- bdev/blockdev.sh@738 -- # cat 00:12:34.860 22:20:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:34.860 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.860 22:20:33 -- common/autotest_common.sh@10 -- # set +x 00:12:34.860 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.860 22:20:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:34.860 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.860 22:20:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.132 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.132 22:20:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:35.132 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.132 22:20:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.132 22:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.132 22:20:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:35.132 22:20:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:35.132 22:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.132 22:20:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.132 22:20:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:35.132 22:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.132 22:20:34 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:35.132 22:20:34 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:35.133 22:20:34 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "56fb772f-98cd-4981-8d29-025aa5c96e8e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56fb772f-98cd-4981-8d29-025aa5c96e8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "64c23c71-507a-50d9-b241-1b97b11dee30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64c23c71-507a-50d9-b241-1b97b11dee30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "978dfaf6-69b3-570a-9d45-347a0e85093f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "978dfaf6-69b3-570a-9d45-347a0e85093f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f98a8111-9524-5112-8f40-64a2334c7ba0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f98a8111-9524-5112-8f40-64a2334c7ba0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cfa30c1c-d6c4-55c1-8362-82150270ba37"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfa30c1c-d6c4-55c1-8362-82150270ba37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d6947687-9e0b-53fe-8ba6-620acacbaae0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d6947687-9e0b-53fe-8ba6-620acacbaae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8ada1432-a335-5337-a61f-da68cf355761"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ada1432-a335-5337-a61f-da68cf355761",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "5175279d-c7e6-566a-bd8b-11a94dcac5a7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5175279d-c7e6-566a-bd8b-11a94dcac5a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6617cfa2-0c64-5998-89d8-11d97e64ea58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6617cfa2-0c64-5998-89d8-11d97e64ea58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "46cca63a-6abd-5cfa-9693-156f07276a37"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46cca63a-6abd-5cfa-9693-156f07276a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2e13406f-7ee2-5f30-877a-e37119a83ad6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e13406f-7ee2-5f30-877a-e37119a83ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "41922f7e-fb37-47b1-8504-64a9be399972"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e0a3fd41-22b3-415d-bfe9-6a271022b18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "20340eb5-a519-4650-8b9f-39c799d820fd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9c1608e1-0a17-464d-8eb1-8f180e64f70c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "93e9d76e-42f8-4736-82d0-98f1d345a752",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d835e232-e134-4f66-8b9a-448a827397f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9b83d6a1-b237-433d-9dea-112e2ef384ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "933de93d-9271-4a0e-98fe-b16af19ef68a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "169e7ab5-3efc-447c-aba4-2ab61a44c9a5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "169e7ab5-3efc-447c-aba4-2ab61a44c9a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:35.133 22:20:34 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:35.133 22:20:34 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:35.133 22:20:34 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:35.133 22:20:34 -- bdev/blockdev.sh@752 -- # killprocess 110895 00:12:35.133 22:20:34 -- common/autotest_common.sh@926 -- # '[' -z 110895 ']' 00:12:35.133 22:20:34 -- common/autotest_common.sh@930 -- # kill -0 110895 00:12:35.133 22:20:34 -- common/autotest_common.sh@931 -- # uname 00:12:35.133 22:20:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:35.133 22:20:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110895 00:12:35.133 22:20:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:35.133 22:20:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:35.133 22:20:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110895' 00:12:35.133 killing process with pid 110895 00:12:35.133 22:20:34 -- common/autotest_common.sh@945 -- # kill 110895 00:12:35.133 22:20:34 -- common/autotest_common.sh@950 -- # wait 110895 00:12:37.666 22:20:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:37.666 22:20:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:37.666 22:20:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:12:37.666 22:20:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.666 22:20:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 ************************************ 00:12:37.666 START TEST bdev_hello_world 00:12:37.666 ************************************ 00:12:37.666 22:20:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:37.666 [2024-07-14 22:20:36.717118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:37.666 [2024-07-14 22:20:36.717633] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110980 ] 00:12:37.925 [2024-07-14 22:20:36.885624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.183 [2024-07-14 22:20:37.066493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.442 [2024-07-14 22:20:37.423012] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.442 [2024-07-14 22:20:37.423362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.442 [2024-07-14 22:20:37.430978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.442 [2024-07-14 22:20:37.431237] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.442 [2024-07-14 22:20:37.439005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.442 [2024-07-14 22:20:37.439212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:38.442 [2024-07-14 22:20:37.439356] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:38.701 [2024-07-14 22:20:37.633258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.701 [2024-07-14 22:20:37.633685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.701 [2024-07-14 22:20:37.633778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:38.701 [2024-07-14 22:20:37.634026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.701 [2024-07-14 22:20:37.636593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.701 [2024-07-14 22:20:37.636799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:38.959 [2024-07-14 22:20:37.948804] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:38.959 [2024-07-14 22:20:37.949015] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:38.959 [2024-07-14 22:20:37.949197] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:38.959 [2024-07-14 22:20:37.949368] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:38.959 [2024-07-14 22:20:37.949568] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:38.959 [2024-07-14 22:20:37.949716] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:38.959 [2024-07-14 22:20:37.949886] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
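hello_bdev above is self-contained: pointed at a bdev JSON config and a bdev name, it opens the bdev, takes an I/O channel, writes "Hello World!", reads it back, and stops, which is exactly the NOTICE sequence in the log. The invocation reduces to:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0
    # expected tail of the output: "Read string from bdev : Hello World!"
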
00:12:38.959 00:12:38.959 [2024-07-14 22:20:37.950046] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:40.861 ************************************ 00:12:40.861 END TEST bdev_hello_world 00:12:40.861 ************************************ 00:12:40.861 00:12:40.861 real 0m3.154s 00:12:40.861 user 0m2.598s 00:12:40.861 sys 0m0.389s 00:12:40.861 22:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.861 22:20:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.861 22:20:39 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:40.861 22:20:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:40.861 22:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.861 22:20:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.861 ************************************ 00:12:40.861 START TEST bdev_bounds 00:12:40.861 ************************************ 00:12:40.861 22:20:39 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:40.861 22:20:39 -- bdev/blockdev.sh@288 -- # bdevio_pid=111067 00:12:40.861 22:20:39 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:40.861 22:20:39 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:40.861 22:20:39 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 111067' 00:12:40.861 Process bdevio pid: 111067 00:12:40.861 22:20:39 -- bdev/blockdev.sh@291 -- # waitforlisten 111067 00:12:40.861 22:20:39 -- common/autotest_common.sh@819 -- # '[' -z 111067 ']' 00:12:40.861 22:20:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.861 22:20:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:40.861 22:20:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.861 22:20:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:40.861 22:20:39 -- common/autotest_common.sh@10 -- # set +x 00:12:41.119 [2024-07-14 22:20:39.929184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:41.119 [2024-07-14 22:20:39.930280] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111067 ] 00:12:41.119 [2024-07-14 22:20:40.145380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:41.377 [2024-07-14 22:20:40.368262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.378 [2024-07-14 22:20:40.368413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.378 [2024-07-14 22:20:40.368417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.944 [2024-07-14 22:20:40.742729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.944 [2024-07-14 22:20:40.743096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.944 [2024-07-14 22:20:40.750689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.944 [2024-07-14 22:20:40.750893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.944 [2024-07-14 22:20:40.758723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.944 [2024-07-14 22:20:40.758907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:41.944 [2024-07-14 22:20:40.759061] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:41.944 [2024-07-14 22:20:40.954382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.944 [2024-07-14 22:20:40.954823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.944 [2024-07-14 22:20:40.955009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:41.944 [2024-07-14 22:20:40.955140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.944 [2024-07-14 22:20:40.957914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.945 [2024-07-14 22:20:40.958101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:42.882 22:20:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:42.882 22:20:41 -- common/autotest_common.sh@852 -- # return 0 00:12:42.882 22:20:41 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:42.882 I/O targets: 00:12:42.882 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:42.882 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:42.882 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:42.882 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:42.882 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:42.882 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:42.882 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:42.882 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:42.882 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
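bdevio runs as a server rather than a one-shot binary: the harness starts it with -w (park until tests are requested over RPC) and -s 0 (no pre-reserved memory, per PRE_RESERVED_MEM=0 above), and tests.py perform_tests then kicks off one suite per bdev listed under "I/O targets". Sketched as two shell steps:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # perform_tests drives every bdevio suite below over the RPC socket
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
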
00:12:42.882 00:12:42.882 00:12:42.882 CUnit - A unit testing framework for C - Version 2.1-3 00:12:42.882 http://cunit.sourceforge.net/ 00:12:42.882 00:12:42.882 00:12:42.882 Suite: bdevio tests on: AIO0 00:12:42.882 Test: blockdev write read block ...passed 00:12:42.882 Test: blockdev write zeroes read block ...passed 00:12:42.882 Test: blockdev write zeroes read no split ...passed 00:12:42.882 Test: blockdev write zeroes read split ...passed 00:12:42.882 Test: blockdev write zeroes read split partial ...passed 00:12:42.882 Test: blockdev reset ...passed 00:12:42.882 Test: blockdev write read 8 blocks ...passed 00:12:42.882 Test: blockdev write read size > 128k ...passed 00:12:42.882 Test: blockdev write read invalid size ...passed 00:12:42.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.882 Test: blockdev write read max offset ...passed 00:12:42.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.882 Test: blockdev writev readv 8 blocks ...passed 00:12:42.882 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.882 Test: blockdev writev readv block ...passed 00:12:42.882 Test: blockdev writev readv size > 128k ...passed 00:12:42.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.882 Test: blockdev comparev and writev ...passed 00:12:42.882 Test: blockdev nvme passthru rw ...passed 00:12:42.882 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.882 Test: blockdev nvme admin passthru ...passed 00:12:42.882 Test: blockdev copy ...passed 00:12:42.882 Suite: bdevio tests on: raid1 00:12:42.882 Test: blockdev write read block ...passed 00:12:42.882 Test: blockdev write zeroes read block ...passed 00:12:42.882 Test: blockdev write zeroes read no split ...passed 00:12:42.882 Test: blockdev write zeroes read split ...passed 00:12:42.882 Test: blockdev write zeroes read split partial ...passed 00:12:42.882 Test: blockdev reset ...passed 00:12:42.882 Test: blockdev write read 8 blocks ...passed 00:12:42.882 Test: blockdev write read size > 128k ...passed 00:12:42.882 Test: blockdev write read invalid size ...passed 00:12:42.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.882 Test: blockdev write read max offset ...passed 00:12:42.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.882 Test: blockdev writev readv 8 blocks ...passed 00:12:42.882 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.882 Test: blockdev writev readv block ...passed 00:12:42.882 Test: blockdev writev readv size > 128k ...passed 00:12:42.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.882 Test: blockdev comparev and writev ...passed 00:12:42.882 Test: blockdev nvme passthru rw ...passed 00:12:42.882 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.882 Test: blockdev nvme admin passthru ...passed 00:12:42.882 Test: blockdev copy ...passed 00:12:42.882 Suite: bdevio tests on: concat0 00:12:42.882 Test: blockdev write read block ...passed 00:12:42.882 Test: blockdev write zeroes read block ...passed 00:12:42.882 Test: blockdev write zeroes read no split ...passed 00:12:42.882 Test: blockdev write zeroes read split ...passed 00:12:42.882 Test: blockdev write zeroes read split partial ...passed 00:12:42.882 Test: blockdev reset 
...passed 00:12:42.882 Test: blockdev write read 8 blocks ...passed 00:12:42.882 Test: blockdev write read size > 128k ...passed 00:12:42.882 Test: blockdev write read invalid size ...passed 00:12:42.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.882 Test: blockdev write read max offset ...passed 00:12:42.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.882 Test: blockdev writev readv 8 blocks ...passed 00:12:42.882 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.882 Test: blockdev writev readv block ...passed 00:12:42.882 Test: blockdev writev readv size > 128k ...passed 00:12:42.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.882 Test: blockdev comparev and writev ...passed 00:12:42.882 Test: blockdev nvme passthru rw ...passed 00:12:42.882 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.882 Test: blockdev nvme admin passthru ...passed 00:12:42.882 Test: blockdev copy ...passed 00:12:42.882 Suite: bdevio tests on: raid0 00:12:42.882 Test: blockdev write read block ...passed 00:12:42.882 Test: blockdev write zeroes read block ...passed 00:12:42.882 Test: blockdev write zeroes read no split ...passed 00:12:42.882 Test: blockdev write zeroes read split ...passed 00:12:42.882 Test: blockdev write zeroes read split partial ...passed 00:12:42.882 Test: blockdev reset ...passed 00:12:42.882 Test: blockdev write read 8 blocks ...passed 00:12:42.882 Test: blockdev write read size > 128k ...passed 00:12:42.882 Test: blockdev write read invalid size ...passed 00:12:42.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.882 Test: blockdev write read max offset ...passed 00:12:42.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.882 Test: blockdev writev readv 8 blocks ...passed 00:12:42.882 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.882 Test: blockdev writev readv block ...passed 00:12:42.882 Test: blockdev writev readv size > 128k ...passed 00:12:42.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.882 Test: blockdev comparev and writev ...passed 00:12:42.882 Test: blockdev nvme passthru rw ...passed 00:12:42.882 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.882 Test: blockdev nvme admin passthru ...passed 00:12:42.882 Test: blockdev copy ...passed 00:12:42.882 Suite: bdevio tests on: TestPT 00:12:42.882 Test: blockdev write read block ...passed 00:12:42.882 Test: blockdev write zeroes read block ...passed 00:12:42.882 Test: blockdev write zeroes read no split ...passed 00:12:42.882 Test: blockdev write zeroes read split ...passed 00:12:43.142 Test: blockdev write zeroes read split partial ...passed 00:12:43.142 Test: blockdev reset ...passed 00:12:43.142 Test: blockdev write read 8 blocks ...passed 00:12:43.142 Test: blockdev write read size > 128k ...passed 00:12:43.142 Test: blockdev write read invalid size ...passed 00:12:43.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.142 Test: blockdev write read max offset ...passed 00:12:43.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.142 Test: blockdev writev readv 8 blocks 
...passed 00:12:43.142 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.142 Test: blockdev writev readv block ...passed 00:12:43.142 Test: blockdev writev readv size > 128k ...passed 00:12:43.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.142 Test: blockdev comparev and writev ...passed 00:12:43.142 Test: blockdev nvme passthru rw ...passed 00:12:43.142 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.142 Test: blockdev nvme admin passthru ...passed 00:12:43.142 Test: blockdev copy ...passed 00:12:43.142 Suite: bdevio tests on: Malloc2p7 00:12:43.142 Test: blockdev write read block ...passed 00:12:43.142 Test: blockdev write zeroes read block ...passed 00:12:43.142 Test: blockdev write zeroes read no split ...passed 00:12:43.142 Test: blockdev write zeroes read split ...passed 00:12:43.142 Test: blockdev write zeroes read split partial ...passed 00:12:43.142 Test: blockdev reset ...passed 00:12:43.142 Test: blockdev write read 8 blocks ...passed 00:12:43.142 Test: blockdev write read size > 128k ...passed 00:12:43.142 Test: blockdev write read invalid size ...passed 00:12:43.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.142 Test: blockdev write read max offset ...passed 00:12:43.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.142 Test: blockdev writev readv 8 blocks ...passed 00:12:43.142 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.142 Test: blockdev writev readv block ...passed 00:12:43.142 Test: blockdev writev readv size > 128k ...passed 00:12:43.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.142 Test: blockdev comparev and writev ...passed 00:12:43.142 Test: blockdev nvme passthru rw ...passed 00:12:43.142 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.142 Test: blockdev nvme admin passthru ...passed 00:12:43.142 Test: blockdev copy ...passed 00:12:43.142 Suite: bdevio tests on: Malloc2p6 00:12:43.142 Test: blockdev write read block ...passed 00:12:43.142 Test: blockdev write zeroes read block ...passed 00:12:43.142 Test: blockdev write zeroes read no split ...passed 00:12:43.142 Test: blockdev write zeroes read split ...passed 00:12:43.142 Test: blockdev write zeroes read split partial ...passed 00:12:43.142 Test: blockdev reset ...passed 00:12:43.142 Test: blockdev write read 8 blocks ...passed 00:12:43.142 Test: blockdev write read size > 128k ...passed 00:12:43.142 Test: blockdev write read invalid size ...passed 00:12:43.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.142 Test: blockdev write read max offset ...passed 00:12:43.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.142 Test: blockdev writev readv 8 blocks ...passed 00:12:43.142 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.142 Test: blockdev writev readv block ...passed 00:12:43.142 Test: blockdev writev readv size > 128k ...passed 00:12:43.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.142 Test: blockdev comparev and writev ...passed 00:12:43.142 Test: blockdev nvme passthru rw ...passed 00:12:43.142 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.142 Test: blockdev nvme admin passthru ...passed 00:12:43.142 Test: blockdev copy ...passed 
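Each bdevio suite in this run repeats the same 23 checks against a different bdev, which is why the block above recurs nearly verbatim for every device under test (the run summary further down confirms 16 suites and 368 tests, i.e. 23 per suite). A minimal bash sketch of a driver loop with that shape is below; run_bdevio_suite is a hypothetical stand-in for however the bdevio binary dispatches one suite, and only the bdev names and the per-suite check list are taken from this log:

    # Hypothetical sketch: one identical suite of 23 checks per bdev.
    # run_bdevio_suite is a stand-in, not a real SPDK command.
    for bdev in Malloc0 Malloc1p0 Malloc1p1 \
                Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 \
                Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 \
                TestPT raid0 concat0 raid1 AIO0; do
        run_bdevio_suite "$bdev"   # write/read, write zeroes, reset, writev/readv,
                                   # comparev, nvme passthru, and copy checks
    done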
00:12:43.142 Suite: bdevio tests on: Malloc2p5 00:12:43.142 Test: blockdev write read block ...passed 00:12:43.142 Test: blockdev write zeroes read block ...passed 00:12:43.142 Test: blockdev write zeroes read no split ...passed 00:12:43.142 Test: blockdev write zeroes read split ...passed 00:12:43.142 Test: blockdev write zeroes read split partial ...passed 00:12:43.142 Test: blockdev reset ...passed 00:12:43.142 Test: blockdev write read 8 blocks ...passed 00:12:43.142 Test: blockdev write read size > 128k ...passed 00:12:43.142 Test: blockdev write read invalid size ...passed 00:12:43.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.142 Test: blockdev write read max offset ...passed 00:12:43.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.142 Test: blockdev writev readv 8 blocks ...passed 00:12:43.142 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.142 Test: blockdev writev readv block ...passed 00:12:43.142 Test: blockdev writev readv size > 128k ...passed 00:12:43.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.142 Test: blockdev comparev and writev ...passed 00:12:43.142 Test: blockdev nvme passthru rw ...passed 00:12:43.142 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.142 Test: blockdev nvme admin passthru ...passed 00:12:43.142 Test: blockdev copy ...passed 00:12:43.142 Suite: bdevio tests on: Malloc2p4 00:12:43.142 Test: blockdev write read block ...passed 00:12:43.142 Test: blockdev write zeroes read block ...passed 00:12:43.142 Test: blockdev write zeroes read no split ...passed 00:12:43.142 Test: blockdev write zeroes read split ...passed 00:12:43.142 Test: blockdev write zeroes read split partial ...passed 00:12:43.142 Test: blockdev reset ...passed 00:12:43.142 Test: blockdev write read 8 blocks ...passed 00:12:43.142 Test: blockdev write read size > 128k ...passed 00:12:43.142 Test: blockdev write read invalid size ...passed 00:12:43.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.142 Test: blockdev write read max offset ...passed 00:12:43.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.402 Test: blockdev writev readv 8 blocks ...passed 00:12:43.402 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.402 Test: blockdev writev readv block ...passed 00:12:43.402 Test: blockdev writev readv size > 128k ...passed 00:12:43.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.402 Test: blockdev comparev and writev ...passed 00:12:43.402 Test: blockdev nvme passthru rw ...passed 00:12:43.402 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.402 Test: blockdev nvme admin passthru ...passed 00:12:43.402 Test: blockdev copy ...passed 00:12:43.402 Suite: bdevio tests on: Malloc2p3 00:12:43.402 Test: blockdev write read block ...passed 00:12:43.402 Test: blockdev write zeroes read block ...passed 00:12:43.402 Test: blockdev write zeroes read no split ...passed 00:12:43.402 Test: blockdev write zeroes read split ...passed 00:12:43.402 Test: blockdev write zeroes read split partial ...passed 00:12:43.402 Test: blockdev reset ...passed 00:12:43.402 Test: blockdev write read 8 blocks ...passed 00:12:43.402 Test: blockdev write read size > 128k ...passed 00:12:43.402 Test: 
blockdev write read invalid size ...passed 00:12:43.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.402 Test: blockdev write read max offset ...passed 00:12:43.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.402 Test: blockdev writev readv 8 blocks ...passed 00:12:43.402 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.402 Test: blockdev writev readv block ...passed 00:12:43.402 Test: blockdev writev readv size > 128k ...passed 00:12:43.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.402 Test: blockdev comparev and writev ...passed 00:12:43.402 Test: blockdev nvme passthru rw ...passed 00:12:43.402 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.402 Test: blockdev nvme admin passthru ...passed 00:12:43.402 Test: blockdev copy ...passed 00:12:43.402 Suite: bdevio tests on: Malloc2p2 00:12:43.402 Test: blockdev write read block ...passed 00:12:43.402 Test: blockdev write zeroes read block ...passed 00:12:43.402 Test: blockdev write zeroes read no split ...passed 00:12:43.402 Test: blockdev write zeroes read split ...passed 00:12:43.402 Test: blockdev write zeroes read split partial ...passed 00:12:43.402 Test: blockdev reset ...passed 00:12:43.402 Test: blockdev write read 8 blocks ...passed 00:12:43.402 Test: blockdev write read size > 128k ...passed 00:12:43.402 Test: blockdev write read invalid size ...passed 00:12:43.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.402 Test: blockdev write read max offset ...passed 00:12:43.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.402 Test: blockdev writev readv 8 blocks ...passed 00:12:43.402 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.402 Test: blockdev writev readv block ...passed 00:12:43.402 Test: blockdev writev readv size > 128k ...passed 00:12:43.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.402 Test: blockdev comparev and writev ...passed 00:12:43.402 Test: blockdev nvme passthru rw ...passed 00:12:43.402 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.402 Test: blockdev nvme admin passthru ...passed 00:12:43.402 Test: blockdev copy ...passed 00:12:43.402 Suite: bdevio tests on: Malloc2p1 00:12:43.402 Test: blockdev write read block ...passed 00:12:43.402 Test: blockdev write zeroes read block ...passed 00:12:43.402 Test: blockdev write zeroes read no split ...passed 00:12:43.402 Test: blockdev write zeroes read split ...passed 00:12:43.402 Test: blockdev write zeroes read split partial ...passed 00:12:43.402 Test: blockdev reset ...passed 00:12:43.402 Test: blockdev write read 8 blocks ...passed 00:12:43.402 Test: blockdev write read size > 128k ...passed 00:12:43.402 Test: blockdev write read invalid size ...passed 00:12:43.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.402 Test: blockdev write read max offset ...passed 00:12:43.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.402 Test: blockdev writev readv 8 blocks ...passed 00:12:43.402 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.402 Test: blockdev writev readv block ...passed 
00:12:43.402 Test: blockdev writev readv size > 128k ...passed 00:12:43.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.402 Test: blockdev comparev and writev ...passed 00:12:43.402 Test: blockdev nvme passthru rw ...passed 00:12:43.402 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.402 Test: blockdev nvme admin passthru ...passed 00:12:43.402 Test: blockdev copy ...passed 00:12:43.402 Suite: bdevio tests on: Malloc2p0 00:12:43.402 Test: blockdev write read block ...passed 00:12:43.402 Test: blockdev write zeroes read block ...passed 00:12:43.402 Test: blockdev write zeroes read no split ...passed 00:12:43.402 Test: blockdev write zeroes read split ...passed 00:12:43.402 Test: blockdev write zeroes read split partial ...passed 00:12:43.402 Test: blockdev reset ...passed 00:12:43.402 Test: blockdev write read 8 blocks ...passed 00:12:43.402 Test: blockdev write read size > 128k ...passed 00:12:43.402 Test: blockdev write read invalid size ...passed 00:12:43.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.402 Test: blockdev write read max offset ...passed 00:12:43.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.402 Test: blockdev writev readv 8 blocks ...passed 00:12:43.402 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.402 Test: blockdev writev readv block ...passed 00:12:43.402 Test: blockdev writev readv size > 128k ...passed 00:12:43.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.402 Test: blockdev comparev and writev ...passed 00:12:43.402 Test: blockdev nvme passthru rw ...passed 00:12:43.402 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.402 Test: blockdev nvme admin passthru ...passed 00:12:43.402 Test: blockdev copy ...passed 00:12:43.402 Suite: bdevio tests on: Malloc1p1 00:12:43.402 Test: blockdev write read block ...passed 00:12:43.402 Test: blockdev write zeroes read block ...passed 00:12:43.402 Test: blockdev write zeroes read no split ...passed 00:12:43.402 Test: blockdev write zeroes read split ...passed 00:12:43.662 Test: blockdev write zeroes read split partial ...passed 00:12:43.662 Test: blockdev reset ...passed 00:12:43.662 Test: blockdev write read 8 blocks ...passed 00:12:43.662 Test: blockdev write read size > 128k ...passed 00:12:43.662 Test: blockdev write read invalid size ...passed 00:12:43.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.662 Test: blockdev write read max offset ...passed 00:12:43.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.662 Test: blockdev writev readv 8 blocks ...passed 00:12:43.662 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.662 Test: blockdev writev readv block ...passed 00:12:43.662 Test: blockdev writev readv size > 128k ...passed 00:12:43.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.662 Test: blockdev comparev and writev ...passed 00:12:43.662 Test: blockdev nvme passthru rw ...passed 00:12:43.662 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.662 Test: blockdev nvme admin passthru ...passed 00:12:43.662 Test: blockdev copy ...passed 00:12:43.662 Suite: bdevio tests on: Malloc1p0 00:12:43.662 Test: blockdev write read block ...passed 00:12:43.662 Test: blockdev 
write zeroes read block ...passed 00:12:43.662 Test: blockdev write zeroes read no split ...passed 00:12:43.662 Test: blockdev write zeroes read split ...passed 00:12:43.662 Test: blockdev write zeroes read split partial ...passed 00:12:43.662 Test: blockdev reset ...passed 00:12:43.662 Test: blockdev write read 8 blocks ...passed 00:12:43.662 Test: blockdev write read size > 128k ...passed 00:12:43.662 Test: blockdev write read invalid size ...passed 00:12:43.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.662 Test: blockdev write read max offset ...passed 00:12:43.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.662 Test: blockdev writev readv 8 blocks ...passed 00:12:43.662 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.662 Test: blockdev writev readv block ...passed 00:12:43.662 Test: blockdev writev readv size > 128k ...passed 00:12:43.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.662 Test: blockdev comparev and writev ...passed 00:12:43.662 Test: blockdev nvme passthru rw ...passed 00:12:43.662 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.662 Test: blockdev nvme admin passthru ...passed 00:12:43.662 Test: blockdev copy ...passed 00:12:43.662 Suite: bdevio tests on: Malloc0 00:12:43.662 Test: blockdev write read block ...passed 00:12:43.662 Test: blockdev write zeroes read block ...passed 00:12:43.662 Test: blockdev write zeroes read no split ...passed 00:12:43.662 Test: blockdev write zeroes read split ...passed 00:12:43.662 Test: blockdev write zeroes read split partial ...passed 00:12:43.662 Test: blockdev reset ...passed 00:12:43.662 Test: blockdev write read 8 blocks ...passed 00:12:43.662 Test: blockdev write read size > 128k ...passed 00:12:43.662 Test: blockdev write read invalid size ...passed 00:12:43.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.662 Test: blockdev write read max offset ...passed 00:12:43.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.662 Test: blockdev writev readv 8 blocks ...passed 00:12:43.662 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.662 Test: blockdev writev readv block ...passed 00:12:43.662 Test: blockdev writev readv size > 128k ...passed 00:12:43.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.662 Test: blockdev comparev and writev ...passed 00:12:43.662 Test: blockdev nvme passthru rw ...passed 00:12:43.662 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.662 Test: blockdev nvme admin passthru ...passed 00:12:43.662 Test: blockdev copy ...passed 00:12:43.662 00:12:43.662 Run Summary: Type Total Ran Passed Failed Inactive 00:12:43.662 suites 16 16 n/a 0 0 00:12:43.662 tests 368 368 368 0 0 00:12:43.662 asserts 2224 2224 2224 0 n/a 00:12:43.662 00:12:43.662 Elapsed time = 2.588 seconds 00:12:43.662 0 00:12:43.662 22:20:42 -- bdev/blockdev.sh@293 -- # killprocess 111067 00:12:43.662 22:20:42 -- common/autotest_common.sh@926 -- # '[' -z 111067 ']' 00:12:43.662 22:20:42 -- common/autotest_common.sh@930 -- # kill -0 111067 00:12:43.662 22:20:42 -- common/autotest_common.sh@931 -- # uname 00:12:43.662 22:20:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.662 22:20:42 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111067 00:12:43.662 killing process with pid 111067 00:12:43.662 22:20:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:43.662 22:20:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:43.662 22:20:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111067' 00:12:43.662 22:20:42 -- common/autotest_common.sh@945 -- # kill 111067 00:12:43.662 22:20:42 -- common/autotest_common.sh@950 -- # wait 111067 00:12:45.563 22:20:44 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:45.563 00:12:45.563 real 0m4.468s 00:12:45.563 user 0m11.417s 00:12:45.563 sys 0m0.578s 00:12:45.563 ************************************ 00:12:45.563 END TEST bdev_bounds 00:12:45.563 ************************************ 00:12:45.563 22:20:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.563 22:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.563 22:20:44 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:45.563 22:20:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:45.563 22:20:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.563 22:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.563 ************************************ 00:12:45.563 START TEST bdev_nbd 00:12:45.563 ************************************ 00:12:45.563 22:20:44 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:45.563 22:20:44 -- bdev/blockdev.sh@298 -- # uname -s 00:12:45.563 22:20:44 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:45.563 22:20:44 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.563 22:20:44 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:45.563 22:20:44 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:12:45.563 22:20:44 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:45.563 22:20:44 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:45.563 22:20:44 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:45.563 22:20:44 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:12:45.563 22:20:44 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:45.563 22:20:44 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:45.563 22:20:44 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:12:45.563 22:20:44 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:45.563 22:20:44 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:12:45.563 22:20:44 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:45.563 22:20:44 -- bdev/blockdev.sh@316 -- # nbd_pid=111163 00:12:45.563 22:20:44 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:45.563 22:20:44 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:45.563 22:20:44 -- bdev/blockdev.sh@318 -- # waitforlisten 111163 /var/tmp/spdk-nbd.sock 00:12:45.563 22:20:44 -- common/autotest_common.sh@819 -- # '[' -z 111163 ']' 
00:12:45.563 22:20:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:45.563 22:20:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.563 22:20:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:45.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:45.563 22:20:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.563 22:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.563 [2024-07-14 22:20:44.438960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:45.563 [2024-07-14 22:20:44.439339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.563 [2024-07-14 22:20:44.597998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.822 [2024-07-14 22:20:44.802200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.389 [2024-07-14 22:20:45.167573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.389 [2024-07-14 22:20:45.167955] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.389 [2024-07-14 22:20:45.175546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.389 [2024-07-14 22:20:45.175744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.389 [2024-07-14 22:20:45.183542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.389 [2024-07-14 22:20:45.183722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:46.389 [2024-07-14 22:20:45.183885] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:46.389 [2024-07-14 22:20:45.372843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.389 [2024-07-14 22:20:45.373280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.389 [2024-07-14 22:20:45.373374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:46.389 [2024-07-14 22:20:45.373588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.389 [2024-07-14 22:20:45.375964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.389 [2024-07-14 22:20:45.376149] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:47.325 22:20:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.325 22:20:46 -- common/autotest_common.sh@852 -- # return 0 00:12:47.325 22:20:46 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@24 -- # local i 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:47.325 22:20:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:47.325 22:20:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:47.325 22:20:46 -- common/autotest_common.sh@857 -- # local i 00:12:47.325 22:20:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.325 22:20:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.325 22:20:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:47.325 22:20:46 -- common/autotest_common.sh@861 -- # break 00:12:47.325 22:20:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.325 22:20:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.325 22:20:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.325 1+0 records in 00:12:47.325 1+0 records out 00:12:47.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370439 s, 11.1 MB/s 00:12:47.325 22:20:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.326 22:20:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.326 22:20:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.326 22:20:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.326 22:20:46 -- common/autotest_common.sh@877 -- # return 0 00:12:47.326 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:47.326 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:47.326 22:20:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:47.584 22:20:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:47.584 22:20:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:47.584 22:20:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:47.584 22:20:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:47.584 22:20:46 -- common/autotest_common.sh@857 -- # local i 00:12:47.584 22:20:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.584 22:20:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.584 22:20:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:47.584 22:20:46 -- common/autotest_common.sh@861 -- # break 00:12:47.584 22:20:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.584 22:20:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.584 22:20:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:47.584 1+0 records in 00:12:47.584 1+0 records out 00:12:47.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463165 s, 8.8 MB/s 00:12:47.584 22:20:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.584 22:20:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.585 22:20:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.585 22:20:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.585 22:20:46 -- common/autotest_common.sh@877 -- # return 0 00:12:47.585 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:47.585 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:47.585 22:20:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:47.844 22:20:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:48.103 22:20:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:48.103 22:20:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:48.103 22:20:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:48.103 22:20:46 -- common/autotest_common.sh@857 -- # local i 00:12:48.103 22:20:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.103 22:20:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.103 22:20:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:48.103 22:20:46 -- common/autotest_common.sh@861 -- # break 00:12:48.103 22:20:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.103 22:20:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.103 22:20:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.103 1+0 records in 00:12:48.103 1+0 records out 00:12:48.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347128 s, 11.8 MB/s 00:12:48.103 22:20:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.103 22:20:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.103 22:20:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.103 22:20:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.103 22:20:46 -- common/autotest_common.sh@877 -- # return 0 00:12:48.103 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.103 22:20:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:48.103 22:20:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:48.361 22:20:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:48.361 22:20:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:48.361 22:20:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:48.361 22:20:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:48.361 22:20:47 -- common/autotest_common.sh@857 -- # local i 00:12:48.361 22:20:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.361 22:20:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.361 22:20:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:48.361 22:20:47 -- common/autotest_common.sh@861 -- # break 00:12:48.361 22:20:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.361 22:20:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.361 22:20:47 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.361 1+0 records in 00:12:48.361 1+0 records out 00:12:48.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991678 s, 4.1 MB/s 00:12:48.361 22:20:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.361 22:20:47 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.362 22:20:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.362 22:20:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.362 22:20:47 -- common/autotest_common.sh@877 -- # return 0 00:12:48.362 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.362 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:48.362 22:20:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:48.619 22:20:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:48.619 22:20:47 -- common/autotest_common.sh@857 -- # local i 00:12:48.619 22:20:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:48.619 22:20:47 -- common/autotest_common.sh@861 -- # break 00:12:48.619 22:20:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.619 1+0 records in 00:12:48.619 1+0 records out 00:12:48.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460131 s, 8.9 MB/s 00:12:48.619 22:20:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.619 22:20:47 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.619 22:20:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.619 22:20:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.619 22:20:47 -- common/autotest_common.sh@877 -- # return 0 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:48.619 22:20:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:48.619 22:20:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:48.619 22:20:47 -- common/autotest_common.sh@857 -- # local i 00:12:48.619 22:20:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:48.619 22:20:47 -- common/autotest_common.sh@861 -- # break 00:12:48.619 22:20:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.619 22:20:47 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.619 22:20:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.878 1+0 records in 00:12:48.878 1+0 records out 00:12:48.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729051 s, 5.6 MB/s 00:12:48.878 22:20:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.878 22:20:47 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.878 22:20:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.878 22:20:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.878 22:20:47 -- common/autotest_common.sh@877 -- # return 0 00:12:48.878 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.878 22:20:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:48.878 22:20:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:49.136 22:20:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:49.136 22:20:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:49.136 22:20:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:49.136 22:20:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:49.136 22:20:47 -- common/autotest_common.sh@857 -- # local i 00:12:49.136 22:20:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.136 22:20:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.136 22:20:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:49.136 22:20:47 -- common/autotest_common.sh@861 -- # break 00:12:49.136 22:20:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.136 22:20:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.136 22:20:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.136 1+0 records in 00:12:49.136 1+0 records out 00:12:49.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129987 s, 3.2 MB/s 00:12:49.136 22:20:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.136 22:20:47 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.136 22:20:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.136 22:20:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.136 22:20:48 -- common/autotest_common.sh@877 -- # return 0 00:12:49.136 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.136 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.136 22:20:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:49.393 22:20:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:49.393 22:20:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:49.393 22:20:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:49.393 22:20:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:49.393 22:20:48 -- common/autotest_common.sh@857 -- # local i 00:12:49.393 22:20:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.393 22:20:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.393 22:20:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:49.393 22:20:48 -- common/autotest_common.sh@861 -- # break 
00:12:49.393 22:20:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.393 22:20:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.393 22:20:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.393 1+0 records in 00:12:49.393 1+0 records out 00:12:49.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691036 s, 5.9 MB/s 00:12:49.393 22:20:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.393 22:20:48 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.393 22:20:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.393 22:20:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.393 22:20:48 -- common/autotest_common.sh@877 -- # return 0 00:12:49.394 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.394 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.394 22:20:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:49.651 22:20:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:49.651 22:20:48 -- common/autotest_common.sh@857 -- # local i 00:12:49.651 22:20:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.651 22:20:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.651 22:20:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:49.651 22:20:48 -- common/autotest_common.sh@861 -- # break 00:12:49.651 22:20:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.651 22:20:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.651 22:20:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.651 1+0 records in 00:12:49.651 1+0 records out 00:12:49.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507304 s, 8.1 MB/s 00:12:49.651 22:20:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.651 22:20:48 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.651 22:20:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.651 22:20:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.651 22:20:48 -- common/autotest_common.sh@877 -- # return 0 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.651 22:20:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:49.910 22:20:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:49.910 22:20:48 -- common/autotest_common.sh@857 -- # local i 00:12:49.910 22:20:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.910 22:20:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.910 22:20:48 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd9 /proc/partitions 00:12:49.910 22:20:48 -- common/autotest_common.sh@861 -- # break 00:12:49.910 22:20:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.910 22:20:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.910 22:20:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.910 1+0 records in 00:12:49.910 1+0 records out 00:12:49.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067998 s, 6.0 MB/s 00:12:49.910 22:20:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.910 22:20:48 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.910 22:20:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.910 22:20:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.910 22:20:48 -- common/autotest_common.sh@877 -- # return 0 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.910 22:20:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:50.168 22:20:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:50.168 22:20:49 -- common/autotest_common.sh@857 -- # local i 00:12:50.168 22:20:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.168 22:20:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.168 22:20:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:50.168 22:20:49 -- common/autotest_common.sh@861 -- # break 00:12:50.168 22:20:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.168 22:20:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.168 22:20:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.168 1+0 records in 00:12:50.168 1+0 records out 00:12:50.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753322 s, 5.4 MB/s 00:12:50.168 22:20:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.168 22:20:49 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.168 22:20:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.168 22:20:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.168 22:20:49 -- common/autotest_common.sh@877 -- # return 0 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.168 22:20:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:50.464 22:20:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:50.464 22:20:49 -- common/autotest_common.sh@857 -- # local i 00:12:50.464 22:20:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.464 22:20:49 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.464 22:20:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:50.464 22:20:49 -- common/autotest_common.sh@861 -- # break 00:12:50.464 22:20:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.464 22:20:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.464 22:20:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.464 1+0 records in 00:12:50.464 1+0 records out 00:12:50.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136138 s, 3.0 MB/s 00:12:50.464 22:20:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.464 22:20:49 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.464 22:20:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.464 22:20:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.464 22:20:49 -- common/autotest_common.sh@877 -- # return 0 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.464 22:20:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:50.722 22:20:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:50.722 22:20:49 -- common/autotest_common.sh@857 -- # local i 00:12:50.722 22:20:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.722 22:20:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.722 22:20:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:50.722 22:20:49 -- common/autotest_common.sh@861 -- # break 00:12:50.722 22:20:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.722 22:20:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.722 22:20:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.722 1+0 records in 00:12:50.722 1+0 records out 00:12:50.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709499 s, 5.8 MB/s 00:12:50.722 22:20:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.722 22:20:49 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.722 22:20:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.722 22:20:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.722 22:20:49 -- common/autotest_common.sh@877 -- # return 0 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.722 22:20:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:50.979 22:20:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:50.979 22:20:49 -- common/autotest_common.sh@857 -- # local i 
00:12:50.979 22:20:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.979 22:20:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.979 22:20:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:50.979 22:20:49 -- common/autotest_common.sh@861 -- # break 00:12:50.979 22:20:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.979 22:20:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.979 22:20:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.979 1+0 records in 00:12:50.979 1+0 records out 00:12:50.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067063 s, 6.1 MB/s 00:12:50.979 22:20:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.979 22:20:49 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.979 22:20:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.979 22:20:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.979 22:20:49 -- common/autotest_common.sh@877 -- # return 0 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.979 22:20:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:51.237 22:20:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:51.237 22:20:50 -- common/autotest_common.sh@857 -- # local i 00:12:51.237 22:20:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:51.237 22:20:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:51.237 22:20:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:51.237 22:20:50 -- common/autotest_common.sh@861 -- # break 00:12:51.237 22:20:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:51.237 22:20:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:51.237 22:20:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.237 1+0 records in 00:12:51.237 1+0 records out 00:12:51.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102786 s, 4.0 MB/s 00:12:51.237 22:20:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.237 22:20:50 -- common/autotest_common.sh@874 -- # size=4096 00:12:51.237 22:20:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.237 22:20:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:51.237 22:20:50 -- common/autotest_common.sh@877 -- # return 0 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.237 22:20:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:51.496 22:20:50 -- common/autotest_common.sh@856 -- # 
local nbd_name=nbd15 00:12:51.496 22:20:50 -- common/autotest_common.sh@857 -- # local i 00:12:51.496 22:20:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:51.496 22:20:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:51.496 22:20:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:51.496 22:20:50 -- common/autotest_common.sh@861 -- # break 00:12:51.496 22:20:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:51.496 22:20:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:51.496 22:20:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.496 1+0 records in 00:12:51.496 1+0 records out 00:12:51.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00152582 s, 2.7 MB/s 00:12:51.496 22:20:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.496 22:20:50 -- common/autotest_common.sh@874 -- # size=4096 00:12:51.496 22:20:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.496 22:20:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:51.496 22:20:50 -- common/autotest_common.sh@877 -- # return 0 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.496 22:20:50 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.754 22:20:50 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd0", 00:12:51.754 "bdev_name": "Malloc0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd1", 00:12:51.754 "bdev_name": "Malloc1p0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd2", 00:12:51.754 "bdev_name": "Malloc1p1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd3", 00:12:51.754 "bdev_name": "Malloc2p0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd4", 00:12:51.754 "bdev_name": "Malloc2p1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd5", 00:12:51.754 "bdev_name": "Malloc2p2" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd6", 00:12:51.754 "bdev_name": "Malloc2p3" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd7", 00:12:51.754 "bdev_name": "Malloc2p4" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd8", 00:12:51.754 "bdev_name": "Malloc2p5" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd9", 00:12:51.754 "bdev_name": "Malloc2p6" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd10", 00:12:51.754 "bdev_name": "Malloc2p7" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd11", 00:12:51.754 "bdev_name": "TestPT" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd12", 00:12:51.754 "bdev_name": "raid0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd13", 00:12:51.754 "bdev_name": "concat0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd14", 00:12:51.754 "bdev_name": "raid1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd15", 00:12:51.754 "bdev_name": "AIO0" 00:12:51.754 } 00:12:51.754 ]' 00:12:51.754 22:20:50 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:51.754 22:20:50 -- bdev/nbd_common.sh@119 -- # echo '[ 
00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd0", 00:12:51.754 "bdev_name": "Malloc0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd1", 00:12:51.754 "bdev_name": "Malloc1p0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd2", 00:12:51.754 "bdev_name": "Malloc1p1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd3", 00:12:51.754 "bdev_name": "Malloc2p0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd4", 00:12:51.754 "bdev_name": "Malloc2p1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd5", 00:12:51.754 "bdev_name": "Malloc2p2" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd6", 00:12:51.754 "bdev_name": "Malloc2p3" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd7", 00:12:51.754 "bdev_name": "Malloc2p4" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd8", 00:12:51.754 "bdev_name": "Malloc2p5" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd9", 00:12:51.754 "bdev_name": "Malloc2p6" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd10", 00:12:51.754 "bdev_name": "Malloc2p7" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd11", 00:12:51.754 "bdev_name": "TestPT" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd12", 00:12:51.754 "bdev_name": "raid0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd13", 00:12:51.754 "bdev_name": "concat0" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd14", 00:12:51.754 "bdev_name": "raid1" 00:12:51.754 }, 00:12:51.754 { 00:12:51.754 "nbd_device": "/dev/nbd15", 00:12:51.754 "bdev_name": "AIO0" 00:12:51.754 } 00:12:51.754 ]' 00:12:51.754 22:20:50 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@51 -- # local i 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.012 22:20:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@41 -- # break 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.270 22:20:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.529 22:20:51 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@41 -- # break 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:52.529 22:20:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@41 -- # break 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.787 22:20:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:53.045 22:20:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:53.045 22:20:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:53.045 22:20:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@41 -- # break 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.046 22:20:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@41 -- # break 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.304 22:20:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@41 -- # break 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.563 22:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
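The block above is the teardown path: for each device in nbd_list, nbd_common.sh issues the nbd_stop_disk RPC and then waitfornbd_exit polls /proc/partitions until the kernel drops the node. A minimal sketch of that wait loop, reconstructed from the nbd_common.sh@35-45 trace markers (an approximation, not the verified SPDK source):

# Sketch: wait for an nbd device to vanish after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # device still registered; retry (seen below for nbd10/nbd12)
        else
            break       # name gone from /proc/partitions: disconnect completed
        fi
    done
    return 0            # the only exit observed in this trace (@45)
}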
00:12:53.563 22:20:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@41 -- # break 00:12:53.821 22:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@41 -- # break 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.822 22:20:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@41 -- # break 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.080 22:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@41 -- # break 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.338 22:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.596 22:20:53 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.596 22:20:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@41 -- # break 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.854 22:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@41 -- # break 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.112 22:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:55.112 22:20:54 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@41 -- # break 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.369 22:20:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@41 -- # break 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.628 22:20:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd14 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@41 -- # break 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:55.886 22:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@41 -- # break 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.144 22:20:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@65 -- # true 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@65 -- # count=0 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@122 -- # count=0 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@127 -- # return 0 00:12:56.403 22:20:55 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@10 -- # 
bdev_list=($2) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@12 -- # local i 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.403 22:20:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:56.662 /dev/nbd0 00:12:56.662 22:20:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.662 22:20:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.662 22:20:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:56.662 22:20:55 -- common/autotest_common.sh@857 -- # local i 00:12:56.662 22:20:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:56.662 22:20:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:56.662 22:20:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:56.662 22:20:55 -- common/autotest_common.sh@861 -- # break 00:12:56.662 22:20:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:56.662 22:20:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:56.662 22:20:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.662 1+0 records in 00:12:56.662 1+0 records out 00:12:56.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031803 s, 12.9 MB/s 00:12:56.662 22:20:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.662 22:20:55 -- common/autotest_common.sh@874 -- # size=4096 00:12:56.662 22:20:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.662 22:20:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:56.662 22:20:55 -- common/autotest_common.sh@877 -- # return 0 00:12:56.662 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.662 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.662 22:20:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:56.921 /dev/nbd1 00:12:56.921 22:20:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.921 22:20:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.921 22:20:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:56.921 22:20:55 -- common/autotest_common.sh@857 -- # local i 00:12:56.921 22:20:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:56.921 22:20:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:56.921 22:20:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:56.921 22:20:55 -- common/autotest_common.sh@861 -- # break 00:12:56.921 22:20:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:56.921 22:20:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:56.921 22:20:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.921 1+0 records in 00:12:56.921 1+0 records out 00:12:56.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310479 s, 13.2 MB/s 00:12:56.921 22:20:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.921 22:20:55 -- 
common/autotest_common.sh@874 -- # size=4096 00:12:56.921 22:20:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.921 22:20:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:56.921 22:20:55 -- common/autotest_common.sh@877 -- # return 0 00:12:56.921 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.921 22:20:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.921 22:20:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:57.180 /dev/nbd10 00:12:57.180 22:20:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:57.180 22:20:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:57.180 22:20:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:57.180 22:20:56 -- common/autotest_common.sh@857 -- # local i 00:12:57.180 22:20:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:57.180 22:20:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:57.180 22:20:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:57.180 22:20:56 -- common/autotest_common.sh@861 -- # break 00:12:57.180 22:20:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:57.180 22:20:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:57.180 22:20:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.180 1+0 records in 00:12:57.180 1+0 records out 00:12:57.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383264 s, 10.7 MB/s 00:12:57.180 22:20:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.180 22:20:56 -- common/autotest_common.sh@874 -- # size=4096 00:12:57.180 22:20:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.180 22:20:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:57.180 22:20:56 -- common/autotest_common.sh@877 -- # return 0 00:12:57.180 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.180 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.180 22:20:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:57.438 /dev/nbd11 00:12:57.438 22:20:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:57.438 22:20:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:57.438 22:20:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:57.438 22:20:56 -- common/autotest_common.sh@857 -- # local i 00:12:57.438 22:20:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:57.438 22:20:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:57.438 22:20:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:57.438 22:20:56 -- common/autotest_common.sh@861 -- # break 00:12:57.438 22:20:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:57.438 22:20:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:57.438 22:20:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.438 1+0 records in 00:12:57.438 1+0 records out 00:12:57.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413043 s, 9.9 MB/s 00:12:57.438 22:20:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
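The mirror-image check runs after each nbd_start_disk RPC: waitfornbd (autotest_common.sh@856-877) first waits for the name to appear in /proc/partitions, then proves the device is readable by copying one block off it. A sketch reconstructed from the trace, with the scratch path shortened (the log uses .../spdk/test/bdev/nbdtest), so treat details like the back-off and failure path as assumptions:

waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do               # phase 1: wait for registration
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                  # assumed back-off; not exercised in this run
    done
    for ((i = 1; i <= 20; i++)); do               # phase 2: prove real I/O works
        # Read one 4 KiB block with O_DIRECT so the page cache cannot fake success.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0               # trace: '[' 4096 '!=' 0 ']'
    done
    return 1                                       # assumed failure path; every probe here passed
}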
00:12:57.438 22:20:56 -- common/autotest_common.sh@874 -- # size=4096 00:12:57.438 22:20:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.438 22:20:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:57.438 22:20:56 -- common/autotest_common.sh@877 -- # return 0 00:12:57.438 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.438 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.438 22:20:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:57.696 /dev/nbd12 00:12:57.955 22:20:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:57.955 22:20:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:57.955 22:20:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:57.955 22:20:56 -- common/autotest_common.sh@857 -- # local i 00:12:57.955 22:20:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:57.955 22:20:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:57.955 22:20:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:57.955 22:20:56 -- common/autotest_common.sh@861 -- # break 00:12:57.955 22:20:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:57.955 22:20:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:57.955 22:20:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.955 1+0 records in 00:12:57.955 1+0 records out 00:12:57.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338554 s, 12.1 MB/s 00:12:57.955 22:20:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.955 22:20:56 -- common/autotest_common.sh@874 -- # size=4096 00:12:57.955 22:20:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.955 22:20:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:57.955 22:20:56 -- common/autotest_common.sh@877 -- # return 0 00:12:57.955 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.955 22:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.955 22:20:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:58.214 /dev/nbd13 00:12:58.214 22:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:58.214 22:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:58.214 22:20:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:58.214 22:20:57 -- common/autotest_common.sh@857 -- # local i 00:12:58.214 22:20:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:58.214 22:20:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:58.214 22:20:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:58.214 22:20:57 -- common/autotest_common.sh@861 -- # break 00:12:58.214 22:20:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:58.214 22:20:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:58.214 22:20:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.214 1+0 records in 00:12:58.214 1+0 records out 00:12:58.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326477 s, 12.5 MB/s 00:12:58.214 22:20:57 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.214 22:20:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:58.214 22:20:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.214 22:20:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:58.214 22:20:57 -- common/autotest_common.sh@877 -- # return 0 00:12:58.214 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.214 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.214 22:20:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:58.473 /dev/nbd14 00:12:58.473 22:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:58.473 22:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:58.473 22:20:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:58.473 22:20:57 -- common/autotest_common.sh@857 -- # local i 00:12:58.473 22:20:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:58.473 22:20:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:58.473 22:20:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:58.473 22:20:57 -- common/autotest_common.sh@861 -- # break 00:12:58.473 22:20:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:58.473 22:20:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:58.473 22:20:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.473 1+0 records in 00:12:58.473 1+0 records out 00:12:58.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109172 s, 3.8 MB/s 00:12:58.473 22:20:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.473 22:20:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:58.473 22:20:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.473 22:20:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:58.474 22:20:57 -- common/autotest_common.sh@877 -- # return 0 00:12:58.474 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.474 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.474 22:20:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:58.474 /dev/nbd15 00:12:58.732 22:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:58.732 22:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:58.732 22:20:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:58.732 22:20:57 -- common/autotest_common.sh@857 -- # local i 00:12:58.732 22:20:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:58.732 22:20:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:58.732 22:20:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:58.732 22:20:57 -- common/autotest_common.sh@861 -- # break 00:12:58.732 22:20:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:58.732 22:20:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:58.732 22:20:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.732 1+0 records in 00:12:58.732 1+0 records out 00:12:58.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386963 s, 10.6 MB/s 00:12:58.732 22:20:57 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.732 22:20:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:58.732 22:20:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.732 22:20:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:58.732 22:20:57 -- common/autotest_common.sh@877 -- # return 0 00:12:58.732 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.732 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.732 22:20:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:58.991 /dev/nbd2 00:12:58.991 22:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:58.991 22:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:58.991 22:20:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:58.991 22:20:57 -- common/autotest_common.sh@857 -- # local i 00:12:58.991 22:20:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:58.991 22:20:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:58.991 22:20:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:58.991 22:20:57 -- common/autotest_common.sh@861 -- # break 00:12:58.991 22:20:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:58.991 22:20:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:58.991 22:20:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.991 1+0 records in 00:12:58.991 1+0 records out 00:12:58.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622673 s, 6.6 MB/s 00:12:58.991 22:20:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.991 22:20:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:58.991 22:20:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.991 22:20:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:58.991 22:20:57 -- common/autotest_common.sh@877 -- # return 0 00:12:58.991 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.991 22:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.991 22:20:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:58.991 /dev/nbd3 00:12:58.991 22:20:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:58.991 22:20:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:58.991 22:20:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:58.991 22:20:58 -- common/autotest_common.sh@857 -- # local i 00:12:58.991 22:20:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:58.991 22:20:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:58.991 22:20:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:58.991 22:20:58 -- common/autotest_common.sh@861 -- # break 00:12:58.991 22:20:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:58.991 22:20:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:58.991 22:20:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.991 1+0 records in 00:12:58.991 1+0 records out 00:12:58.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688203 s, 6.0 MB/s 00:12:58.991 
22:20:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.991 22:20:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:58.991 22:20:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.991 22:20:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:58.991 22:20:58 -- common/autotest_common.sh@877 -- # return 0 00:12:58.991 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.991 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.991 22:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:59.250 /dev/nbd4 00:12:59.250 22:20:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:59.250 22:20:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:59.250 22:20:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:59.250 22:20:58 -- common/autotest_common.sh@857 -- # local i 00:12:59.250 22:20:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:59.250 22:20:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:59.250 22:20:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:59.250 22:20:58 -- common/autotest_common.sh@861 -- # break 00:12:59.250 22:20:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:59.250 22:20:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:59.250 22:20:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.508 1+0 records in 00:12:59.508 1+0 records out 00:12:59.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610884 s, 6.7 MB/s 00:12:59.508 22:20:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.508 22:20:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:59.508 22:20:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.508 22:20:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:59.508 22:20:58 -- common/autotest_common.sh@877 -- # return 0 00:12:59.508 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.508 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.508 22:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:59.767 /dev/nbd5 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:59.767 22:20:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:59.767 22:20:58 -- common/autotest_common.sh@857 -- # local i 00:12:59.767 22:20:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:59.767 22:20:58 -- common/autotest_common.sh@861 -- # break 00:12:59.767 22:20:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.767 1+0 records in 00:12:59.767 1+0 records out 00:12:59.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065504 s, 6.3 MB/s 
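Each bdev-to-device pairing above comes from a bare nbd_start_disk RPC against the SPDK app listening on /var/tmp/spdk-nbd.sock. Outside the harness, the same three RPCs can be driven by hand (socket path, bdev, and device names exactly as they appear in this log):

scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5   # export a bdev as a kernel block device
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # JSON array of live mappings
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5           # tear the mapping down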
00:12:59.767 22:20:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.767 22:20:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:59.767 22:20:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.767 22:20:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:59.767 22:20:58 -- common/autotest_common.sh@877 -- # return 0 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:59.767 /dev/nbd6 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:59.767 22:20:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:59.767 22:20:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:59.767 22:20:58 -- common/autotest_common.sh@857 -- # local i 00:12:59.767 22:20:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:59.767 22:20:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:13:00.025 22:20:58 -- common/autotest_common.sh@861 -- # break 00:13:00.025 22:20:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:00.025 22:20:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:00.025 22:20:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.025 1+0 records in 00:13:00.025 1+0 records out 00:13:00.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105579 s, 3.9 MB/s 00:13:00.025 22:20:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.026 22:20:58 -- common/autotest_common.sh@874 -- # size=4096 00:13:00.026 22:20:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.026 22:20:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:00.026 22:20:58 -- common/autotest_common.sh@877 -- # return 0 00:13:00.026 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.026 22:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.026 22:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:00.026 /dev/nbd7 00:13:00.026 22:20:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:00.026 22:20:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:00.026 22:20:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:13:00.026 22:20:59 -- common/autotest_common.sh@857 -- # local i 00:13:00.026 22:20:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:00.026 22:20:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:00.026 22:20:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:13:00.026 22:20:59 -- common/autotest_common.sh@861 -- # break 00:13:00.026 22:20:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:00.026 22:20:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:00.026 22:20:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.026 1+0 records in 00:13:00.026 1+0 records out 00:13:00.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612633 s, 6.7 MB/s 
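For reading the dd probe lines: the reported rate is just bytes over wall time, e.g. the concat0 probe above copied 4096 B in 0.000612633 s, and 4096 / 0.000612633 ≈ 6.7 MB/s. The per-device spread (roughly 2.7-13.2 MB/s across this run) is timing noise on a single 4 KiB O_DIRECT read, not a device bandwidth measurement.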
00:13:00.026 22:20:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.026 22:20:59 -- common/autotest_common.sh@874 -- # size=4096 00:13:00.026 22:20:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.026 22:20:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:00.026 22:20:59 -- common/autotest_common.sh@877 -- # return 0 00:13:00.026 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.026 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.026 22:20:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:00.593 /dev/nbd8 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:00.593 22:20:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:13:00.593 22:20:59 -- common/autotest_common.sh@857 -- # local i 00:13:00.593 22:20:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:13:00.593 22:20:59 -- common/autotest_common.sh@861 -- # break 00:13:00.593 22:20:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.593 1+0 records in 00:13:00.593 1+0 records out 00:13:00.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133291 s, 3.1 MB/s 00:13:00.593 22:20:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.593 22:20:59 -- common/autotest_common.sh@874 -- # size=4096 00:13:00.593 22:20:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.593 22:20:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:00.593 22:20:59 -- common/autotest_common.sh@877 -- # return 0 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:00.593 /dev/nbd9 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:00.593 22:20:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:13:00.593 22:20:59 -- common/autotest_common.sh@857 -- # local i 00:13:00.593 22:20:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:13:00.593 22:20:59 -- common/autotest_common.sh@861 -- # break 00:13:00.593 22:20:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:00.593 22:20:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.593 1+0 records in 00:13:00.593 1+0 records out 00:13:00.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013778 s, 3.0 MB/s 
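With all 16 devices mapped, the harness re-queries the mapping table and derives a count rather than trusting its own bookkeeping. The counting idiom from the nbd_common.sh@63-66 trace, condensed:

nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')      # one /dev/nbdN per line
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)                # '|| true': grep exits non-zero on zero matches

The same expression evaluated to 0 against the empty '[]' earlier in the trace, which is what let the first nbd_stop_disks pass be declared complete; here it evaluates to 16.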
00:13:00.593 22:20:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.593 22:20:59 -- common/autotest_common.sh@874 -- # size=4096 00:13:00.593 22:20:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.593 22:20:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:00.593 22:20:59 -- common/autotest_common.sh@877 -- # return 0 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.593 22:20:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:00.859 22:20:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd0", 00:13:00.859 "bdev_name": "Malloc0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd1", 00:13:00.859 "bdev_name": "Malloc1p0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd10", 00:13:00.859 "bdev_name": "Malloc1p1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd11", 00:13:00.859 "bdev_name": "Malloc2p0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd12", 00:13:00.859 "bdev_name": "Malloc2p1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd13", 00:13:00.859 "bdev_name": "Malloc2p2" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd14", 00:13:00.859 "bdev_name": "Malloc2p3" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd15", 00:13:00.859 "bdev_name": "Malloc2p4" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd2", 00:13:00.859 "bdev_name": "Malloc2p5" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd3", 00:13:00.859 "bdev_name": "Malloc2p6" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd4", 00:13:00.859 "bdev_name": "Malloc2p7" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd5", 00:13:00.859 "bdev_name": "TestPT" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd6", 00:13:00.859 "bdev_name": "raid0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd7", 00:13:00.859 "bdev_name": "concat0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd8", 00:13:00.859 "bdev_name": "raid1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd9", 00:13:00.859 "bdev_name": "AIO0" 00:13:00.859 } 00:13:00.859 ]' 00:13:00.859 22:20:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd0", 00:13:00.859 "bdev_name": "Malloc0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd1", 00:13:00.859 "bdev_name": "Malloc1p0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd10", 00:13:00.859 "bdev_name": "Malloc1p1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd11", 00:13:00.859 "bdev_name": "Malloc2p0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd12", 00:13:00.859 "bdev_name": "Malloc2p1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd13", 00:13:00.859 "bdev_name": "Malloc2p2" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd14", 00:13:00.859 "bdev_name": "Malloc2p3" 00:13:00.859 }, 00:13:00.859 { 
00:13:00.859 "nbd_device": "/dev/nbd15", 00:13:00.859 "bdev_name": "Malloc2p4" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd2", 00:13:00.859 "bdev_name": "Malloc2p5" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd3", 00:13:00.859 "bdev_name": "Malloc2p6" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd4", 00:13:00.859 "bdev_name": "Malloc2p7" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd5", 00:13:00.859 "bdev_name": "TestPT" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd6", 00:13:00.859 "bdev_name": "raid0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd7", 00:13:00.859 "bdev_name": "concat0" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd8", 00:13:00.859 "bdev_name": "raid1" 00:13:00.859 }, 00:13:00.859 { 00:13:00.859 "nbd_device": "/dev/nbd9", 00:13:00.859 "bdev_name": "AIO0" 00:13:00.859 } 00:13:00.859 ]' 00:13:00.859 22:20:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.130 22:20:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:01.130 /dev/nbd1 00:13:01.130 /dev/nbd10 00:13:01.130 /dev/nbd11 00:13:01.130 /dev/nbd12 00:13:01.130 /dev/nbd13 00:13:01.130 /dev/nbd14 00:13:01.130 /dev/nbd15 00:13:01.130 /dev/nbd2 00:13:01.130 /dev/nbd3 00:13:01.130 /dev/nbd4 00:13:01.130 /dev/nbd5 00:13:01.130 /dev/nbd6 00:13:01.130 /dev/nbd7 00:13:01.130 /dev/nbd8 00:13:01.130 /dev/nbd9' 00:13:01.130 22:20:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:01.130 /dev/nbd1 00:13:01.130 /dev/nbd10 00:13:01.130 /dev/nbd11 00:13:01.130 /dev/nbd12 00:13:01.130 /dev/nbd13 00:13:01.130 /dev/nbd14 00:13:01.130 /dev/nbd15 00:13:01.130 /dev/nbd2 00:13:01.130 /dev/nbd3 00:13:01.130 /dev/nbd4 00:13:01.130 /dev/nbd5 00:13:01.130 /dev/nbd6 00:13:01.130 /dev/nbd7 00:13:01.130 /dev/nbd8 00:13:01.130 /dev/nbd9' 00:13:01.130 22:20:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@65 -- # count=16 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@66 -- # echo 16 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@95 -- # count=16 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:01.131 256+0 records in 00:13:01.131 256+0 records out 00:13:01.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775734 s, 135 MB/s 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.131 22:20:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.131 256+0 records in 00:13:01.131 256+0 records out 00:13:01.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13317 s, 7.9 MB/s 00:13:01.131 22:21:00 -- bdev/nbd_common.sh@77 -- # 
for i in "${nbd_list[@]}" 00:13:01.131 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:01.390 256+0 records in 00:13:01.390 256+0 records out 00:13:01.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193544 s, 5.4 MB/s 00:13:01.390 22:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.390 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:01.649 256+0 records in 00:13:01.649 256+0 records out 00:13:01.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172026 s, 6.1 MB/s 00:13:01.649 22:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.649 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:01.649 256+0 records in 00:13:01.649 256+0 records out 00:13:01.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135124 s, 7.8 MB/s 00:13:01.649 22:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.649 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:01.907 256+0 records in 00:13:01.907 256+0 records out 00:13:01.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133094 s, 7.9 MB/s 00:13:01.907 22:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.907 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:01.907 256+0 records in 00:13:01.907 256+0 records out 00:13:01.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124292 s, 8.4 MB/s 00:13:01.907 22:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.907 22:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:02.165 256+0 records in 00:13:02.165 256+0 records out 00:13:02.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137143 s, 7.6 MB/s 00:13:02.165 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.165 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:02.165 256+0 records in 00:13:02.165 256+0 records out 00:13:02.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121315 s, 8.6 MB/s 00:13:02.165 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.165 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:02.422 256+0 records in 00:13:02.422 256+0 records out 00:13:02.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125562 s, 8.4 MB/s 00:13:02.422 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.422 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:02.422 256+0 records in 00:13:02.422 256+0 records out 00:13:02.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130309 s, 8.0 MB/s 00:13:02.422 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.422 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:02.679 256+0 records in 00:13:02.679 256+0 records out 00:13:02.679 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131365 s, 8.0 MB/s 00:13:02.679 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.679 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:02.679 256+0 records in 00:13:02.679 256+0 records out 00:13:02.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128883 s, 8.1 MB/s 00:13:02.679 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.679 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:02.936 256+0 records in 00:13:02.936 256+0 records out 00:13:02.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130929 s, 8.0 MB/s 00:13:02.936 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.936 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:02.936 256+0 records in 00:13:02.936 256+0 records out 00:13:02.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142131 s, 7.4 MB/s 00:13:02.936 22:21:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.936 22:21:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:03.194 256+0 records in 00:13:03.194 256+0 records out 00:13:03.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130496 s, 8.0 MB/s 00:13:03.194 22:21:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.194 22:21:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:03.452 256+0 records in 00:13:03.452 256+0 records out 00:13:03.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210946 s, 5.0 MB/s 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:03.452 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@51 -- # local i 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.453 22:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@41 -- # break 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.711 22:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@41 -- # break 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.969 22:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@41 -- # break 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.227 22:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:04.486 22:21:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@41 -- # break 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.745 22:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@41 -- # break 
00:13:05.004 22:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.004 22:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@41 -- # break 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.263 22:21:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@41 -- # break 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.521 22:21:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@41 -- # break 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.780 22:21:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@41 -- # break 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.038 22:21:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.038 22:21:04 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:06.296 22:21:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@41 -- # break 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.297 22:21:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@41 -- # break 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.555 22:21:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@41 -- # break 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.814 22:21:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:07.073 22:21:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@41 -- # break 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@45 -- # return 0 
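Note: the per-device teardown above repeats the same two-step pattern for every entry in nbd_list: an nbd_stop_disk RPC over /var/tmp/spdk-nbd.sock, then waitfornbd_exit, which polls /proc/partitions until the kernel drops the device. A bash sketch of the helper, reconstructed from the @35-@45 traces of nbd_common.sh in this log (the retry bound of 20 and the 0.1 s sleep are taken directly from the traces):

waitfornbd_exit() {
    local nbd_name=$1
    local i
    # Poll up to 20 times, 0.1 s apart, for the device to vanish
    # from /proc/partitions after nbd_stop_disk has been issued.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1
        else
            break
        fi
    done
    return 0
}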
00:13:07.331 22:21:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.331 22:21:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@41 -- # break 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.588 22:21:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@41 -- # break 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.847 22:21:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:08.105 22:21:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@41 -- # break 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.105 22:21:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.106 22:21:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.364 22:21:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.364 22:21:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.364 22:21:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.621 22:21:07 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@65 -- # true 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@104 -- # count=0 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@109 -- # return 0 00:13:08.621 22:21:07 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:08.621 malloc_lvol_verify 00:13:08.621 22:21:07 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:08.878 e5fba0b0-c22a-407a-85ff-1ac0dfe5efc2 00:13:08.878 22:21:07 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:09.138 73a1b2e2-bfe2-4903-a134-2f9ce441df03 00:13:09.138 22:21:08 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:09.397 /dev/nbd0 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:09.397 mke2fs 1.45.5 (07-Jan-2020) 00:13:09.397 00:13:09.397 Filesystem too small for a journal 00:13:09.397 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:09.397 00:13:09.397 Allocating group tables: 0/1 done 00:13:09.397 Writing inode tables: 0/1 done 00:13:09.397 Writing superblocks and filesystem accounting information: 0/1 done 00:13:09.397 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@51 -- # local i 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.397 22:21:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.692 22:21:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.692 22:21:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.692 22:21:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.692 22:21:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.692 22:21:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.693 22:21:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.693 22:21:08 -- bdev/nbd_common.sh@41 -- # break 00:13:09.693 22:21:08 -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.693 22:21:08 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:09.693 22:21:08 -- 
bdev/nbd_common.sh@147 -- # return 0 00:13:09.693 22:21:08 -- bdev/blockdev.sh@324 -- # killprocess 111163 00:13:09.693 22:21:08 -- common/autotest_common.sh@926 -- # '[' -z 111163 ']' 00:13:09.693 22:21:08 -- common/autotest_common.sh@930 -- # kill -0 111163 00:13:09.693 22:21:08 -- common/autotest_common.sh@931 -- # uname 00:13:09.693 22:21:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:09.693 22:21:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111163 00:13:09.950 22:21:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:09.950 killing process with pid 111163 00:13:09.950 22:21:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:09.950 22:21:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111163' 00:13:09.950 22:21:08 -- common/autotest_common.sh@945 -- # kill 111163 00:13:09.950 22:21:08 -- common/autotest_common.sh@950 -- # wait 111163 00:13:11.851 22:21:10 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:11.851 00:13:11.851 real 0m26.252s 00:13:11.851 user 0m35.803s 00:13:11.851 sys 0m8.943s 00:13:11.851 22:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.851 22:21:10 -- common/autotest_common.sh@10 -- # set +x 00:13:11.851 ************************************ 00:13:11.851 END TEST bdev_nbd 00:13:11.851 ************************************ 00:13:11.851 22:21:10 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:11.851 22:21:10 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.851 22:21:10 -- common/autotest_common.sh@10 -- # set +x 00:13:11.851 ************************************ 00:13:11.851 START TEST bdev_fio 00:13:11.851 ************************************ 00:13:11.851 22:21:10 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@329 -- # local env_context 00:13:11.851 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:11.851 22:21:10 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.851 22:21:10 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:11.851 22:21:10 -- bdev/blockdev.sh@337 -- # echo '' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:11.851 22:21:10 -- bdev/blockdev.sh@337 -- # env_context= 00:13:11.851 22:21:10 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.851 22:21:10 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:11.851 22:21:10 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:11.851 22:21:10 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:11.851 22:21:10 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:11.851 22:21:10 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:11.851 
22:21:10 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.851 22:21:10 -- common/autotest_common.sh@1280 -- # cat 00:13:11.851 22:21:10 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1293 -- # cat 00:13:11.851 22:21:10 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:11.851 22:21:10 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:11.851 22:21:10 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:11.851 
22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:11.851 22:21:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:11.851 22:21:10 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:11.851 22:21:10 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:11.851 22:21:10 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.851 22:21:10 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.851 22:21:10 -- common/autotest_common.sh@10 -- # set +x 00:13:11.851 ************************************ 00:13:11.851 START TEST bdev_fio_rw_verify 00:13:11.851 ************************************ 00:13:11.851 22:21:10 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.851 22:21:10 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.851 22:21:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:11.851 22:21:10 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:11.851 22:21:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:11.851 22:21:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.851 22:21:10 -- common/autotest_common.sh@1320 -- # shift 00:13:11.851 22:21:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:11.851 22:21:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.851 22:21:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.851 22:21:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:11.851 22:21:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:11.851 22:21:10 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:11.851 22:21:10 -- common/autotest_common.sh@1326 -- # break 
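Note: condensed, the launch sequence the @1316-@1331 traces perform is: ldd the spdk_bdev fio plugin, pick out the libasan runtime it links against, preload both libraries, and invoke fio with the spdk_bdev ioengine against the generated job file and JSON config. A standalone sketch using the paths and flags exactly as they appear in this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Locate the ASan runtime the plugin links against (the @1324 traces).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload ASan and the plugin, then run fio with the bdev ioengine.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output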
00:13:11.851 22:21:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.851 22:21:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.110 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.110 fio-3.35 00:13:12.110 Starting 16 threads 00:13:24.312 00:13:24.312 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=112414: Sun Jul 14 22:21:22 2024 00:13:24.312 read: IOPS=64.2k, BW=251MiB/s (263MB/s)(2508MiB/10005msec) 00:13:24.312 slat (usec): min=2, max=36047, avg=48.16, stdev=479.58 00:13:24.312 clat (usec): min=10, max=32373, avg=367.23, stdev=1337.43 00:13:24.312 lat (usec): min=31, max=36286, avg=415.39, stdev=1420.52 00:13:24.312 clat percentiles (usec): 00:13:24.312 | 50.000th=[ 227], 99.000th=[ 1844], 99.900th=[16450], 99.990th=[24249], 00:13:24.312 | 99.999th=[32375] 00:13:24.312 write: IOPS=103k, BW=404MiB/s (423MB/s)(4000MiB/9907msec); 0 zone resets 00:13:24.312 slat (usec): min=5, max=58244, avg=75.86, stdev=632.07 00:13:24.312 clat (usec): min=12, max=50162, avg=452.83, stdev=1513.49 00:13:24.312 lat (usec): min=40, max=58756, avg=528.68, stdev=1640.46 00:13:24.312 clat percentiles (usec): 00:13:24.312 | 50.000th=[ 
277], 99.000th=[ 8291], 99.900th=[16581], 99.990th=[27395], 00:13:24.312 | 99.999th=[36439] 00:13:24.312 bw ( KiB/s): min=233888, max=635144, per=98.57%, avg=407505.37, stdev=7063.87, samples=304 00:13:24.312 iops : min=58472, max=158786, avg=101876.26, stdev=1765.97, samples=304 00:13:24.312 lat (usec) : 20=0.01%, 50=0.30%, 100=5.06%, 250=43.71%, 500=45.36% 00:13:24.312 lat (usec) : 750=2.85%, 1000=0.75% 00:13:24.312 lat (msec) : 2=0.75%, 4=0.16%, 10=0.21%, 20=0.80%, 50=0.04% 00:13:24.312 lat (msec) : 100=0.01% 00:13:24.312 cpu : usr=58.49%, sys=1.76%, ctx=212218, majf=0, minf=71050 00:13:24.312 IO depths : 1=11.5%, 2=23.7%, 4=51.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.312 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.312 issued rwts: total=642165,1023922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.312 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:24.312 00:13:24.312 Run status group 0 (all jobs): 00:13:24.312 READ: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=2508MiB (2630MB), run=10005-10005msec 00:13:24.312 WRITE: bw=404MiB/s (423MB/s), 404MiB/s-404MiB/s (423MB/s-423MB/s), io=4000MiB (4194MB), run=9907-9907msec 00:13:25.689 ----------------------------------------------------- 00:13:25.689 Suppressions used: 00:13:25.689 count bytes template 00:13:25.689 16 140 /usr/src/fio/parse.c 00:13:25.689 11309 1085664 /usr/src/fio/iolog.c 00:13:25.689 2 596 libcrypto.so 00:13:25.689 ----------------------------------------------------- 00:13:25.689 00:13:25.689 00:13:25.689 real 0m13.800s 00:13:25.689 user 1m38.501s 00:13:25.689 sys 0m3.718s 00:13:25.689 22:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.689 22:21:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.689 ************************************ 00:13:25.689 END TEST bdev_fio_rw_verify 00:13:25.689 ************************************ 00:13:25.689 22:21:24 -- bdev/blockdev.sh@348 -- # rm -f 00:13:25.689 22:21:24 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.689 22:21:24 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.689 22:21:24 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:25.689 22:21:24 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:25.689 22:21:24 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:25.689 22:21:24 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:25.689 22:21:24 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.689 22:21:24 -- common/autotest_common.sh@1280 -- # cat 00:13:25.689 22:21:24 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:25.689 22:21:24 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:25.689 22:21:24 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | 
.name' 00:13:25.690 22:21:24 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "56fb772f-98cd-4981-8d29-025aa5c96e8e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56fb772f-98cd-4981-8d29-025aa5c96e8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "64c23c71-507a-50d9-b241-1b97b11dee30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64c23c71-507a-50d9-b241-1b97b11dee30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "978dfaf6-69b3-570a-9d45-347a0e85093f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "978dfaf6-69b3-570a-9d45-347a0e85093f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f98a8111-9524-5112-8f40-64a2334c7ba0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f98a8111-9524-5112-8f40-64a2334c7ba0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cfa30c1c-d6c4-55c1-8362-82150270ba37"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfa30c1c-d6c4-55c1-8362-82150270ba37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d6947687-9e0b-53fe-8ba6-620acacbaae0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d6947687-9e0b-53fe-8ba6-620acacbaae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8ada1432-a335-5337-a61f-da68cf355761"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ada1432-a335-5337-a61f-da68cf355761",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5175279d-c7e6-566a-bd8b-11a94dcac5a7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5175279d-c7e6-566a-bd8b-11a94dcac5a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6617cfa2-0c64-5998-89d8-11d97e64ea58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6617cfa2-0c64-5998-89d8-11d97e64ea58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "46cca63a-6abd-5cfa-9693-156f07276a37"' ' ],' ' "product_name": "Split Disk",' 
' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46cca63a-6abd-5cfa-9693-156f07276a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2e13406f-7ee2-5f30-877a-e37119a83ad6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e13406f-7ee2-5f30-877a-e37119a83ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "41922f7e-fb37-47b1-8504-64a9be399972"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e0a3fd41-22b3-415d-bfe9-6a271022b18d",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "20340eb5-a519-4650-8b9f-39c799d820fd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9c1608e1-0a17-464d-8eb1-8f180e64f70c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "93e9d76e-42f8-4736-82d0-98f1d345a752",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d835e232-e134-4f66-8b9a-448a827397f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9b83d6a1-b237-433d-9dea-112e2ef384ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "933de93d-9271-4a0e-98fe-b16af19ef68a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "169e7ab5-3efc-447c-aba4-2ab61a44c9a5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "169e7ab5-3efc-447c-aba4-2ab61a44c9a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:25.690 22:21:24 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:25.690 Malloc1p0 00:13:25.690 Malloc1p1 00:13:25.690 Malloc2p0 00:13:25.690 Malloc2p1 00:13:25.690 Malloc2p2 00:13:25.690 Malloc2p3 00:13:25.690 Malloc2p4 00:13:25.690 Malloc2p5 00:13:25.690 Malloc2p6 00:13:25.690 Malloc2p7 00:13:25.690 TestPT 00:13:25.690 raid0 00:13:25.690 concat0 ]] 00:13:25.690 22:21:24 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "56fb772f-98cd-4981-8d29-025aa5c96e8e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56fb772f-98cd-4981-8d29-025aa5c96e8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "64c23c71-507a-50d9-b241-1b97b11dee30"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64c23c71-507a-50d9-b241-1b97b11dee30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "978dfaf6-69b3-570a-9d45-347a0e85093f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "978dfaf6-69b3-570a-9d45-347a0e85093f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f98a8111-9524-5112-8f40-64a2334c7ba0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f98a8111-9524-5112-8f40-64a2334c7ba0",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cfa30c1c-d6c4-55c1-8362-82150270ba37"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfa30c1c-d6c4-55c1-8362-82150270ba37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d6947687-9e0b-53fe-8ba6-620acacbaae0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d6947687-9e0b-53fe-8ba6-620acacbaae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8ada1432-a335-5337-a61f-da68cf355761"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ada1432-a335-5337-a61f-da68cf355761",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5175279d-c7e6-566a-bd8b-11a94dcac5a7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5175279d-c7e6-566a-bd8b-11a94dcac5a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' 
' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6617cfa2-0c64-5998-89d8-11d97e64ea58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6617cfa2-0c64-5998-89d8-11d97e64ea58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "46cca63a-6abd-5cfa-9693-156f07276a37"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46cca63a-6abd-5cfa-9693-156f07276a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9f0f887-42d2-5a4e-a584-fbb9d9cb15f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2e13406f-7ee2-5f30-877a-e37119a83ad6"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e13406f-7ee2-5f30-877a-e37119a83ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "41922f7e-fb37-47b1-8504-64a9be399972"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "41922f7e-fb37-47b1-8504-64a9be399972",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e0a3fd41-22b3-415d-bfe9-6a271022b18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "20340eb5-a519-4650-8b9f-39c799d820fd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9c1608e1-0a17-464d-8eb1-8f180e64f70c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9c1608e1-0a17-464d-8eb1-8f180e64f70c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "93e9d76e-42f8-4736-82d0-98f1d345a752",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d835e232-e134-4f66-8b9a-448a827397f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2ae7b03f-a0c8-43af-9e97-4ea5f02c04af",' ' "strip_size_kb": 0,' ' "state": "online",' ' 
"raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9b83d6a1-b237-433d-9dea-112e2ef384ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "933de93d-9271-4a0e-98fe-b16af19ef68a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "169e7ab5-3efc-447c-aba4-2ab61a44c9a5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "169e7ab5-3efc-447c-aba4-2ab61a44c9a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | 
.name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:25.691 22:21:24 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:25.691 22:21:24 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:25.691 22:21:24 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:25.691 22:21:24 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.691 22:21:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:25.691 22:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.691 22:21:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.950 ************************************ 00:13:25.950 START TEST bdev_fio_trim 00:13:25.950 ************************************ 00:13:25.950 22:21:24 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.950 22:21:24 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.950 22:21:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:25.950 22:21:24 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 
00:13:25.950 22:21:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:25.950 22:21:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.950 22:21:24 -- common/autotest_common.sh@1320 -- # shift 00:13:25.950 22:21:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:25.950 22:21:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.950 22:21:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.950 22:21:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:25.950 22:21:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:25.950 22:21:24 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:25.950 22:21:24 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:25.950 22:21:24 -- common/autotest_common.sh@1326 -- # break 00:13:25.950 22:21:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.950 22:21:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.950 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.950 fio-3.35 00:13:25.950 Starting 14 threads 00:13:38.144 00:13:38.144 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=112658: Sun Jul 14 22:21:36 2024 00:13:38.144 write: IOPS=120k, BW=468MiB/s 
(491MB/s)(4683MiB/10011msec); 0 zone resets 00:13:38.144 slat (usec): min=2, max=28035, avg=43.54, stdev=410.40 00:13:38.144 clat (usec): min=21, max=33986, avg=284.22, stdev=1047.26 00:13:38.144 lat (usec): min=28, max=34009, avg=327.76, stdev=1124.45 00:13:38.144 clat percentiles (usec): 00:13:38.144 | 50.000th=[ 198], 99.000th=[ 594], 99.900th=[16319], 99.990th=[20317], 00:13:38.144 | 99.999th=[28181] 00:13:38.144 bw ( KiB/s): min=335456, max=668384, per=99.98%, avg=478926.24, stdev=8052.62, samples=267 00:13:38.144 iops : min=83864, max=167096, avg=119731.55, stdev=2013.15, samples=267 00:13:38.144 trim: IOPS=120k, BW=468MiB/s (491MB/s)(4683MiB/10011msec); 0 zone resets 00:13:38.144 slat (usec): min=4, max=32044, avg=28.68, stdev=330.12 00:13:38.144 clat (usec): min=3, max=34010, avg=325.39, stdev=1120.73 00:13:38.144 lat (usec): min=9, max=34028, avg=354.07, stdev=1168.13 00:13:38.144 clat percentiles (usec): 00:13:38.144 | 50.000th=[ 229], 99.000th=[ 709], 99.900th=[16319], 99.990th=[20317], 00:13:38.144 | 99.999th=[28443] 00:13:38.144 bw ( KiB/s): min=335456, max=668384, per=99.98%, avg=478926.24, stdev=8052.67, samples=267 00:13:38.144 iops : min=83864, max=167096, avg=119731.55, stdev=2013.17, samples=267 00:13:38.144 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.37%, 100=4.10% 00:13:38.144 lat (usec) : 250=61.26%, 500=32.82%, 750=0.63%, 1000=0.12% 00:13:38.144 lat (msec) : 2=0.03%, 4=0.02%, 10=0.11%, 20=0.54%, 50=0.02% 00:13:38.144 cpu : usr=68.78%, sys=0.67%, ctx=168660, majf=0, minf=845 00:13:38.144 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.144 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.144 issued rwts: total=0,1198832,1198836,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.145 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:38.145 00:13:38.145 Run status group 0 (all jobs): 00:13:38.145 WRITE: bw=468MiB/s (491MB/s), 468MiB/s-468MiB/s (491MB/s-491MB/s), io=4683MiB (4910MB), run=10011-10011msec 00:13:38.145 TRIM: bw=468MiB/s (491MB/s), 468MiB/s-468MiB/s (491MB/s-491MB/s), io=4683MiB (4910MB), run=10011-10011msec 00:13:39.522 ----------------------------------------------------- 00:13:39.522 Suppressions used: 00:13:39.522 count bytes template 00:13:39.522 14 129 /usr/src/fio/parse.c 00:13:39.522 2 596 libcrypto.so 00:13:39.522 ----------------------------------------------------- 00:13:39.522 00:13:39.522 00:13:39.522 real 0m13.541s 00:13:39.522 user 1m40.969s 00:13:39.522 sys 0m1.729s 00:13:39.522 22:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.522 22:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.522 ************************************ 00:13:39.522 END TEST bdev_fio_trim 00:13:39.522 ************************************ 00:13:39.522 22:21:38 -- bdev/blockdev.sh@366 -- # rm -f 00:13:39.522 22:21:38 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.522 /home/vagrant/spdk_repo/spdk 00:13:39.522 22:21:38 -- bdev/blockdev.sh@368 -- # popd 00:13:39.522 22:21:38 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:39.522 ************************************ 00:13:39.522 END TEST bdev_fio 00:13:39.522 ************************************ 00:13:39.522 00:13:39.522 real 0m27.659s 00:13:39.522 user 3m19.669s 00:13:39.522 sys 0m5.554s 00:13:39.522 22:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.522 
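Annotation: the trim pass above only gets a job stanza for bdevs that can actually service unmap. The for-loop traced before the run filters the JSON dump with jq on supported_io_types.unmap and emits one [job_X]/filename=X pair per surviving name. A minimal sketch of that pattern, assuming the dump has been saved to a file (the bdevs.json name is illustrative; the real script streams the bdevs array directly):

# bdevs.json: the bdev JSON dump shown above, saved to a file (illustrative name)
jq -r 'select(.supported_io_types.unmap == true) | .name' bdevs.json |
while read -r b; do
    # one fio job per trim-capable bdev, addressed by bdev name
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> bdev.fio
done

This is also why fio started 14 threads rather than 16: raid1 and AIO0 both report "unmap": false in the dump above, so they get no trim job.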
22:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.522 22:21:38 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:39.522 22:21:38 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.522 22:21:38 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:39.522 22:21:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.522 22:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.522 ************************************ 00:13:39.522 START TEST bdev_verify 00:13:39.522 ************************************ 00:13:39.522 22:21:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.522 [2024-07-14 22:21:38.443043] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:39.522 [2024-07-14 22:21:38.443179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112865 ] 00:13:39.781 [2024-07-14 22:21:38.603253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.053 [2024-07-14 22:21:38.848223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.054 [2024-07-14 22:21:38.848225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.348 [2024-07-14 22:21:39.233269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:40.348 [2024-07-14 22:21:39.233505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:40.348 [2024-07-14 22:21:39.247383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:40.348 [2024-07-14 22:21:39.247510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:40.348 [2024-07-14 22:21:39.255418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:40.348 [2024-07-14 22:21:39.255470] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:40.348 [2024-07-14 22:21:39.255537] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:40.607 [2024-07-14 22:21:39.466936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:40.607 [2024-07-14 22:21:39.467110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.607 [2024-07-14 22:21:39.467173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:40.607 [2024-07-14 22:21:39.467202] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.607 [2024-07-14 22:21:39.470062] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.607 [2024-07-14 22:21:39.470126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:40.866 Running I/O for 5 seconds... 
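Annotation: bdev_verify hands bdevperf a static JSON configuration via --json instead of configuring over RPC. A minimal sketch of a config that flag accepts, assuming a single Malloc bdev for brevity (the bdev.json actually used here also declares the split, passthru, RAID and AIO bdevs dumped earlier):

# illustrative single-bdev config; not the full bdev.json from this run
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -C and two cores (-m 0x3), each bdev gets a job per core, which is why every device appears twice in the results that follow, once under Core Mask 0x1 and once under 0x2.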
00:13:46.135 00:13:46.135 Latency(us) 00:13:46.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.135 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x0 length 0x1000 00:13:46.135 Malloc0 : 5.14 1808.70 7.07 0.00 0.00 70130.03 2219.29 193509.93 00:13:46.135 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x1000 length 0x1000 00:13:46.135 Malloc0 : 5.14 1808.04 7.06 0.00 0.00 70168.36 2129.92 198276.19 00:13:46.135 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x0 length 0x800 00:13:46.135 Malloc1p0 : 5.14 1254.03 4.90 0.00 0.00 100998.64 3589.59 116773.24 00:13:46.135 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x800 length 0x800 00:13:46.135 Malloc1p0 : 5.15 1253.33 4.90 0.00 0.00 101072.13 3708.74 118679.74 00:13:46.135 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x0 length 0x800 00:13:46.135 Malloc1p1 : 5.15 1253.57 4.90 0.00 0.00 100895.73 3470.43 113436.86 00:13:46.135 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x800 length 0x800 00:13:46.135 Malloc1p1 : 5.15 1252.77 4.89 0.00 0.00 100947.73 3559.80 115819.99 00:13:46.135 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x0 length 0x200 00:13:46.135 Malloc2p0 : 5.15 1253.08 4.89 0.00 0.00 100775.23 3544.90 110577.11 00:13:46.135 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x200 length 0x200 00:13:46.135 Malloc2p0 : 5.15 1252.24 4.89 0.00 0.00 100836.57 3678.95 112483.61 00:13:46.135 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.135 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p1 : 5.15 1252.49 4.89 0.00 0.00 100671.17 4200.26 106287.48 00:13:46.136 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p1 : 5.15 1251.79 4.89 0.00 0.00 100697.83 4230.05 108193.98 00:13:46.136 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p2 : 5.17 1263.83 4.94 0.00 0.00 100132.31 3500.22 102474.47 00:13:46.136 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p2 : 5.18 1263.37 4.94 0.00 0.00 100160.05 3604.48 103904.35 00:13:46.136 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p3 : 5.17 1263.50 4.94 0.00 0.00 100005.76 3872.58 98184.84 00:13:46.136 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p3 : 5.18 1262.83 4.93 0.00 0.00 100022.32 3902.37 99614.72 00:13:46.136 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p4 : 5.18 1262.99 4.93 0.00 0.00 
99874.87 3708.74 94371.84 00:13:46.136 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p4 : 5.18 1262.29 4.93 0.00 0.00 99920.22 3842.79 95801.72 00:13:46.136 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p5 : 5.18 1262.45 4.93 0.00 0.00 99777.42 3544.90 90558.84 00:13:46.136 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p5 : 5.18 1261.77 4.93 0.00 0.00 99819.91 3768.32 92465.34 00:13:46.136 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p6 : 5.18 1261.91 4.93 0.00 0.00 99685.79 3395.96 87699.08 00:13:46.136 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p6 : 5.18 1261.29 4.93 0.00 0.00 99704.13 3440.64 89128.96 00:13:46.136 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x200 00:13:46.136 Malloc2p7 : 5.18 1261.43 4.93 0.00 0.00 99576.09 3619.37 84362.71 00:13:46.136 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x200 length 0x200 00:13:46.136 Malloc2p7 : 5.19 1260.76 4.92 0.00 0.00 99587.14 3470.43 85315.96 00:13:46.136 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x1000 00:13:46.136 TestPT : 5.19 1247.63 4.87 0.00 0.00 100511.25 6523.81 84839.33 00:13:46.136 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x1000 length 0x1000 00:13:46.136 TestPT : 5.19 1243.47 4.86 0.00 0.00 100776.39 20494.89 86269.21 00:13:46.136 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x2000 00:13:46.136 raid0 : 5.19 1260.43 4.92 0.00 0.00 99312.94 3559.80 74830.20 00:13:46.136 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x2000 length 0x2000 00:13:46.136 raid0 : 5.19 1259.73 4.92 0.00 0.00 99333.87 3842.79 77213.32 00:13:46.136 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x2000 00:13:46.136 concat0 : 5.19 1259.88 4.92 0.00 0.00 99200.53 3783.21 74353.57 00:13:46.136 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x2000 length 0x2000 00:13:46.136 concat0 : 5.19 1259.26 4.92 0.00 0.00 99219.59 3753.43 74830.20 00:13:46.136 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x0 length 0x1000 00:13:46.136 raid1 : 5.19 1259.39 4.92 0.00 0.00 99069.54 4289.63 73876.95 00:13:46.136 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x1000 length 0x1000 00:13:46.136 raid1 : 5.19 1258.68 4.92 0.00 0.00 99089.94 4379.00 74830.20 00:13:46.136 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 
0x0 length 0x4e2 00:13:46.136 AIO0 : 5.19 1257.95 4.91 0.00 0.00 98996.07 3395.96 75783.45 00:13:46.136 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.136 Verification LBA range: start 0x4e2 length 0x4e2 00:13:46.136 AIO0 : 5.20 1257.76 4.91 0.00 0.00 98973.55 3276.80 75783.45 00:13:46.136 =================================================================================================================== 00:13:46.136 Total : 41352.64 161.53 0.00 0.00 97389.42 2129.92 198276.19 00:13:48.038 00:13:48.038 real 0m8.660s 00:13:48.038 user 0m15.451s 00:13:48.038 sys 0m0.727s 00:13:48.038 22:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.038 ************************************ 00:13:48.038 END TEST bdev_verify 00:13:48.038 ************************************ 00:13:48.038 22:21:47 -- common/autotest_common.sh@10 -- # set +x 00:13:48.038 22:21:47 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:48.038 22:21:47 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:48.038 22:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.038 22:21:47 -- common/autotest_common.sh@10 -- # set +x 00:13:48.296 ************************************ 00:13:48.296 START TEST bdev_verify_big_io 00:13:48.296 ************************************ 00:13:48.296 22:21:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:48.296 [2024-07-14 22:21:47.166040] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
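Annotation: the big-I/O pass starting here bumps the I/O size to 65536 bytes, and bdevperf immediately clamps the verify queue depth, as the warnings below spell out: a verify job cannot keep more I/Os outstanding than the bdev can accept simultaneously. The geometry from the dump above makes the logged limits plausible:

echo $(( 8192 * 512 / 65536 ))    # each Malloc2pN split: 64 distinct 64 KiB regions
echo $(( 5000 * 2048 / 65536 ))   # AIO0: 156 regions

With only 64 regions per 4 MiB split, the requested depth of 128 is unreachable, and bdevperf settles on 32 (78 for AIO0); in this log the chosen limit works out to half the region count.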
00:13:48.296 [2024-07-14 22:21:47.166933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113007 ] 00:13:48.296 [2024-07-14 22:21:47.340359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.554 [2024-07-14 22:21:47.537119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.554 [2024-07-14 22:21:47.537127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.119 [2024-07-14 22:21:47.910345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:49.119 [2024-07-14 22:21:47.910482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:49.119 [2024-07-14 22:21:47.918304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:49.119 [2024-07-14 22:21:47.918414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:49.119 [2024-07-14 22:21:47.926343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:49.119 [2024-07-14 22:21:47.926405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:49.119 [2024-07-14 22:21:47.926455] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:49.119 [2024-07-14 22:21:48.112799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:49.119 [2024-07-14 22:21:48.112983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.119 [2024-07-14 22:21:48.113041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:49.119 [2024-07-14 22:21:48.113063] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.119 [2024-07-14 22:21:48.115654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.119 [2024-07-14 22:21:48.115716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:49.687 [2024-07-14 22:21:48.476813] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.480642] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.485153] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.489207] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.492527] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.496568] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.499833] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.503864] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.507126] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.511163] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.514479] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.518525] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.522075] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.526272] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.530521] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.533845] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:49.687 [2024-07-14 22:21:48.612923] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:49.687 [2024-07-14 22:21:48.619402] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:49.687 Running I/O for 5 seconds... 00:13:56.251 00:13:56.251 Latency(us) 00:13:56.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.251 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x100 00:13:56.251 Malloc0 : 5.46 391.50 24.47 0.00 0.00 317379.61 18111.77 896055.85 00:13:56.251 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x100 length 0x100 00:13:56.251 Malloc0 : 5.55 385.54 24.10 0.00 0.00 323440.85 20018.27 1029510.98 00:13:56.251 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x80 00:13:56.251 Malloc1p0 : 5.65 211.61 13.23 0.00 0.00 570711.52 41228.10 1098145.05 00:13:56.251 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x80 length 0x80 00:13:56.251 Malloc1p0 : 5.56 290.71 18.17 0.00 0.00 424468.46 38844.97 926559.88 00:13:56.251 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x80 00:13:56.251 Malloc1p1 : 5.84 129.85 8.12 0.00 0.00 920515.61 44564.48 1860745.77 00:13:56.251 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x80 length 0x80 00:13:56.251 Malloc1p1 : 5.76 131.59 8.22 0.00 0.00 912869.48 42419.67 1883623.80 00:13:56.251 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p0 : 5.58 73.11 4.57 0.00 0.00 410093.56 7804.74 716844.68 00:13:56.251 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p0 : 5.56 73.42 4.59 0.00 0.00 407828.50 7506.85 610080.58 00:13:56.251 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p1 : 5.58 73.10 4.57 0.00 0.00 408532.19 7149.38 705405.67 00:13:56.251 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p1 : 5.62 76.34 4.77 0.00 0.00 394615.91 6851.49 598641.57 00:13:56.251 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p2 : 5.58 73.08 4.57 0.00 0.00 406850.50 7447.27 697779.67 00:13:56.251 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p2 : 5.62 76.33 4.77 0.00 0.00 393167.32 7089.80 587202.56 00:13:56.251 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p3 : 5.58 73.07 4.57 0.00 0.00 405149.74 8996.31 686340.65 00:13:56.251 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p3 : 5.62 76.31 4.77 0.00 0.00 391633.78 8340.95 571950.55 00:13:56.251 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p4 : 5.58 73.05 4.57 0.00 0.00 403423.44 7506.85 667275.64 00:13:56.251 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p4 : 5.62 76.30 4.77 0.00 0.00 389801.99 6702.55 560511.53 00:13:56.251 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p5 : 5.65 75.93 4.75 0.00 0.00 389306.96 7298.33 655836.63 00:13:56.251 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p5 : 5.62 76.28 4.77 0.00 0.00 388308.16 7536.64 545259.52 00:13:56.251 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p6 : 5.65 75.91 4.74 0.00 0.00 387798.62 7119.59 644397.61 00:13:56.251 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p6 : 5.63 76.27 4.77 0.00 0.00 386754.75 7357.91 537633.51 00:13:56.251 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x20 00:13:56.251 Malloc2p7 : 5.65 75.89 4.74 0.00 0.00 386145.08 7745.16 629145.60 00:13:56.251 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x20 length 0x20 00:13:56.251 Malloc2p7 : 5.63 76.25 4.77 0.00 0.00 385323.42 7208.96 522381.50 00:13:56.251 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x100 00:13:56.251 TestPT : 5.91 134.73 8.42 0.00 0.00 846618.01 24069.59 1868371.78 00:13:56.251 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x100 length 0x100 00:13:56.251 TestPT : 5.80 125.66 7.85 0.00 0.00 915994.69 54573.61 1906501.82 00:13:56.251 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x200 00:13:56.251 raid0 : 5.84 141.94 8.87 0.00 0.00 799178.86 40989.79 1875997.79 00:13:56.251 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x200 length 0x200 00:13:56.251 raid0 : 5.81 142.62 8.91 0.00 0.00 804519.19 36938.47 1860745.77 00:13:56.251 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x200 00:13:56.251 concat0 : 5.86 146.40 9.15 0.00 0.00 766998.68 37415.10 1891249.80 00:13:56.251 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x200 length 0x200 00:13:56.251 concat0 : 5.81 142.60 8.91 0.00 0.00 
791447.63 41466.41 1860745.77 00:13:56.251 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x100 00:13:56.251 raid1 : 5.84 177.99 11.12 0.00 0.00 624545.48 17039.36 1898875.81 00:13:56.251 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x100 length 0x100 00:13:56.251 raid1 : 5.81 158.69 9.92 0.00 0.00 705740.59 23592.96 1868371.78 00:13:56.251 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x0 length 0x4e 00:13:56.251 AIO0 : 5.86 160.95 10.06 0.00 0.00 413429.39 1273.48 1113397.06 00:13:56.251 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:56.251 Verification LBA range: start 0x4e length 0x4e 00:13:56.251 AIO0 : 5.81 165.19 10.32 0.00 0.00 409209.75 3559.80 1090519.04 00:13:56.252 =================================================================================================================== 00:13:56.252 Total : 4238.18 264.89 0.00 0.00 534224.58 1273.48 1906501.82 00:13:58.152 00:13:58.152 real 0m9.640s 00:13:58.152 user 0m17.558s 00:13:58.152 sys 0m0.656s 00:13:58.152 22:21:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.152 ************************************ 00:13:58.152 END TEST bdev_verify_big_io 00:13:58.152 ************************************ 00:13:58.152 22:21:56 -- common/autotest_common.sh@10 -- # set +x 00:13:58.152 22:21:56 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.152 22:21:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:58.152 22:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:58.152 22:21:56 -- common/autotest_common.sh@10 -- # set +x 00:13:58.152 ************************************ 00:13:58.152 START TEST bdev_write_zeroes 00:13:58.152 ************************************ 00:13:58.152 22:21:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.152 [2024-07-14 22:21:56.856747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
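Annotation: the write_zeroes pass reuses the bdevperf harness with -w write_zeroes, -t 1 and a single core (the EAL line below shows -c 0x1, hence one result row per device this time). Eligibility again comes from supported_io_types; against a saved copy of the earlier dump (bdevs.json, as in the trim sketch above) the check is one line:

jq -r 'select(.supported_io_types.write_zeroes == true) | .name' bdevs.json

Every bdev in the dump reports "write_zeroes": true, so all 16 devices, raid1 and AIO0 included this time, appear in the result table below.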
00:13:58.152 [2024-07-14 22:21:56.857146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113168 ] 00:13:58.152 [2024-07-14 22:21:57.022483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.410 [2024-07-14 22:21:57.237904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.668 [2024-07-14 22:21:57.603171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:58.668 [2024-07-14 22:21:57.603276] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:58.668 [2024-07-14 22:21:57.611141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:58.669 [2024-07-14 22:21:57.611233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:58.669 [2024-07-14 22:21:57.619195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:58.669 [2024-07-14 22:21:57.619240] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:58.669 [2024-07-14 22:21:57.619284] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:58.927 [2024-07-14 22:21:57.809844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:58.927 [2024-07-14 22:21:57.809976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.927 [2024-07-14 22:21:57.810027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:58.927 [2024-07-14 22:21:57.810054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.927 [2024-07-14 22:21:57.812541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.927 [2024-07-14 22:21:57.812592] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:59.185 Running I/O for 1 seconds... 
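Annotation: the per-bdev averages in the table that follows are consistent with Little's law for a closed queue at depth 128: throughput is roughly queue depth divided by mean completion latency. Checking Malloc0 against the logged numbers:

qd=128; avg_lat_us=21601                 # Malloc0 average latency, from the table below
echo $(( qd * 1000000 / avg_lat_us ))    # ~5925 IOPS, against the 5922.84 logged

The same back-of-envelope holds across the rows, a quick consistency check on bdevperf's reported IOPS and latency columns.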
00:14:00.564 00:14:00.564 Latency(us) 00:14:00.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.564 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc0 : 1.04 5922.84 23.14 0.00 0.00 21601.30 670.25 38130.04 00:14:00.564 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc1p0 : 1.04 5916.37 23.11 0.00 0.00 21587.12 901.12 37176.79 00:14:00.564 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc1p1 : 1.04 5910.37 23.09 0.00 0.00 21568.73 871.33 36223.53 00:14:00.564 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p0 : 1.04 5904.45 23.06 0.00 0.00 21543.29 904.84 35270.28 00:14:00.564 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p1 : 1.04 5898.45 23.04 0.00 0.00 21525.81 852.71 34555.35 00:14:00.564 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p2 : 1.04 5892.50 23.02 0.00 0.00 21505.63 867.61 33602.09 00:14:00.564 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p3 : 1.04 5886.60 22.99 0.00 0.00 21486.08 848.99 32648.84 00:14:00.564 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p4 : 1.04 5880.74 22.97 0.00 0.00 21470.92 934.63 31695.59 00:14:00.564 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p5 : 1.05 5874.73 22.95 0.00 0.00 21448.35 882.50 30980.65 00:14:00.564 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p6 : 1.05 5868.87 22.93 0.00 0.00 21429.75 889.95 30742.34 00:14:00.564 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 Malloc2p7 : 1.05 5863.00 22.90 0.00 0.00 21407.96 934.63 29789.09 00:14:00.564 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 TestPT : 1.05 5857.15 22.88 0.00 0.00 21380.29 919.74 29074.15 00:14:00.564 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 raid0 : 1.05 5850.31 22.85 0.00 0.00 21360.75 1385.19 27763.43 00:14:00.564 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 concat0 : 1.05 5843.68 22.83 0.00 0.00 21323.29 1385.19 26333.56 00:14:00.564 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 raid1 : 1.05 5835.36 22.79 0.00 0.00 21278.73 2144.81 25022.84 00:14:00.564 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.564 AIO0 : 1.06 5932.19 23.17 0.00 0.00 20833.22 580.89 25022.84 00:14:00.564 =================================================================================================================== 00:14:00.564 Total : 94137.60 367.72 0.00 0.00 21421.22 580.89 38130.04 00:14:02.467 00:14:02.467 real 0m4.334s 00:14:02.467 user 0m3.654s 00:14:02.468 sys 0m0.489s 00:14:02.468 22:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.468 ************************************ 00:14:02.468 END TEST bdev_write_zeroes 00:14:02.468 ************************************ 00:14:02.468 22:22:01 -- common/autotest_common.sh@10 -- # set +x 00:14:02.468 22:22:01 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.468 22:22:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:02.468 22:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.468 22:22:01 -- common/autotest_common.sh@10 -- # set +x 00:14:02.468 ************************************ 00:14:02.468 START TEST bdev_json_nonenclosed 00:14:02.468 ************************************ 00:14:02.468 22:22:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.468 [2024-07-14 22:22:01.251083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:02.468 [2024-07-14 22:22:01.251498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113255 ] 00:14:02.468 [2024-07-14 22:22:01.422322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.726 [2024-07-14 22:22:01.606865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.726 [2024-07-14 22:22:01.607124] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:02.726 [2024-07-14 22:22:01.607171] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:02.986 00:14:02.986 real 0m0.774s 00:14:02.986 user 0m0.542s 00:14:02.986 sys 0m0.131s 00:14:02.986 22:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.986 ************************************ 00:14:02.986 END TEST bdev_json_nonenclosed 00:14:02.986 ************************************ 00:14:02.986 22:22:01 -- common/autotest_common.sh@10 -- # set +x 00:14:02.986 22:22:02 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.986 22:22:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:02.986 22:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.986 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:02.986 ************************************ 00:14:02.986 START TEST bdev_json_nonarray 00:14:02.986 ************************************ 00:14:02.986 22:22:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.245 [2024-07-14 22:22:02.081501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
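Annotation: neither negative fixture is echoed into the log, but the two parser errors pin down their shapes. Guessed reconstructions, not the actual files (those live under test/bdev/ in the repo):

# guessed fixture contents -- illustrative only, the real files are not shown in this log
cat > nonenclosed.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF

The first would trip "not enclosed in {}" as seen above; the second, fed to the nonarray test starting here, would trip "'subsystems' should be an array".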
00:14:03.245 [2024-07-14 22:22:02.081883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113293 ] 00:14:03.245 [2024-07-14 22:22:02.250227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.504 [2024-07-14 22:22:02.442798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.504 [2024-07-14 22:22:02.443036] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:03.504 [2024-07-14 22:22:02.443082] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.762 00:14:03.762 real 0m0.771s 00:14:03.762 user 0m0.514s 00:14:03.763 sys 0m0.156s 00:14:03.763 22:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.763 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:03.763 ************************************ 00:14:03.763 END TEST bdev_json_nonarray 00:14:03.763 ************************************ 00:14:04.021 22:22:02 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:14:04.021 22:22:02 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:14:04.021 22:22:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:04.021 22:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.021 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.021 ************************************ 00:14:04.021 START TEST bdev_qos 00:14:04.021 ************************************ 00:14:04.021 22:22:02 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:14:04.021 22:22:02 -- bdev/blockdev.sh@444 -- # QOS_PID=113331 00:14:04.021 22:22:02 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 113331' 00:14:04.021 Process qos testing pid: 113331 00:14:04.021 22:22:02 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:04.021 22:22:02 -- bdev/blockdev.sh@447 -- # waitforlisten 113331 00:14:04.021 22:22:02 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:04.021 22:22:02 -- common/autotest_common.sh@819 -- # '[' -z 113331 ']' 00:14:04.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.021 22:22:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.021 22:22:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.021 22:22:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.021 22:22:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.021 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.021 [2024-07-14 22:22:02.909028] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
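Annotation: unlike the earlier passes, the qos suite launches bdevperf with -z, so the process comes up idle and waits on its RPC socket (the waitforlisten above), and the bdevs are built over RPC, as the rpc_cmd bdev_malloc_create / bdev_null_create traces below show. The equivalent manual sequence, assuming the stock scripts/rpc.py client and an illustrative limit value:

./build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
# once the socket is up, create the devices the suite will throttle
./scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512     # 128 MiB of 512 B blocks
./scripts/rpc.py bdev_null_create Null_1 128 512
# and later cap one of them; the limit value here is illustrative
./scripts/rpc.py bdev_set_qos_limit Malloc_0 --rw_ios_per_sec 20000

The Malloc_0 descriptor printed below still shows assigned_rate_limits all zero, i.e. the state before any QoS limit has been applied.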
00:14:04.021 [2024-07-14 22:22:02.909215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113331 ] 00:14:04.021 [2024-07-14 22:22:03.078684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.280 [2024-07-14 22:22:03.299547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.846 22:22:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:04.846 22:22:03 -- common/autotest_common.sh@852 -- # return 0 00:14:04.846 22:22:03 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:04.846 22:22:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.846 22:22:03 -- common/autotest_common.sh@10 -- # set +x 00:14:05.104 Malloc_0 00:14:05.104 22:22:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:03 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:14:05.104 22:22:03 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:14:05.104 22:22:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:05.104 22:22:03 -- common/autotest_common.sh@889 -- # local i 00:14:05.104 22:22:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:05.104 22:22:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:05.104 22:22:03 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:05.104 22:22:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.104 22:22:03 -- common/autotest_common.sh@10 -- # set +x 00:14:05.104 22:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:04 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:05.104 22:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.104 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:14:05.104 [ 00:14:05.104 { 00:14:05.104 "name": "Malloc_0", 00:14:05.104 "aliases": [ 00:14:05.104 "d19a5002-d06f-4f9d-87f5-f6d7e74782d3" 00:14:05.104 ], 00:14:05.104 "product_name": "Malloc disk", 00:14:05.104 "block_size": 512, 00:14:05.104 "num_blocks": 262144, 00:14:05.104 "uuid": "d19a5002-d06f-4f9d-87f5-f6d7e74782d3", 00:14:05.104 "assigned_rate_limits": { 00:14:05.104 "rw_ios_per_sec": 0, 00:14:05.104 "rw_mbytes_per_sec": 0, 00:14:05.104 "r_mbytes_per_sec": 0, 00:14:05.104 "w_mbytes_per_sec": 0 00:14:05.104 }, 00:14:05.104 "claimed": false, 00:14:05.104 "zoned": false, 00:14:05.104 "supported_io_types": { 00:14:05.104 "read": true, 00:14:05.104 "write": true, 00:14:05.104 "unmap": true, 00:14:05.104 "write_zeroes": true, 00:14:05.104 "flush": true, 00:14:05.104 "reset": true, 00:14:05.104 "compare": false, 00:14:05.104 "compare_and_write": false, 00:14:05.104 "abort": true, 00:14:05.104 "nvme_admin": false, 00:14:05.104 "nvme_io": false 00:14:05.104 }, 00:14:05.104 "memory_domains": [ 00:14:05.104 { 00:14:05.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.104 "dma_device_type": 2 00:14:05.104 } 00:14:05.104 ], 00:14:05.104 "driver_specific": {} 00:14:05.104 } 00:14:05.104 ] 00:14:05.104 22:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:04 -- common/autotest_common.sh@895 -- # return 0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:05.104 22:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.104 22:22:04 -- common/autotest_common.sh@10 -- # 
set +x 00:14:05.104 Null_1 00:14:05.104 22:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:04 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:14:05.104 22:22:04 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:14:05.104 22:22:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:05.104 22:22:04 -- common/autotest_common.sh@889 -- # local i 00:14:05.104 22:22:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:05.104 22:22:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:05.104 22:22:04 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:05.104 22:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.104 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:14:05.104 22:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:04 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:05.104 22:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.104 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:14:05.104 [ 00:14:05.104 { 00:14:05.104 "name": "Null_1", 00:14:05.104 "aliases": [ 00:14:05.104 "776bf308-123c-4948-9986-87cb3dc3628d" 00:14:05.104 ], 00:14:05.104 "product_name": "Null disk", 00:14:05.104 "block_size": 512, 00:14:05.104 "num_blocks": 262144, 00:14:05.104 "uuid": "776bf308-123c-4948-9986-87cb3dc3628d", 00:14:05.104 "assigned_rate_limits": { 00:14:05.104 "rw_ios_per_sec": 0, 00:14:05.104 "rw_mbytes_per_sec": 0, 00:14:05.104 "r_mbytes_per_sec": 0, 00:14:05.104 "w_mbytes_per_sec": 0 00:14:05.104 }, 00:14:05.104 "claimed": false, 00:14:05.104 "zoned": false, 00:14:05.104 "supported_io_types": { 00:14:05.104 "read": true, 00:14:05.104 "write": true, 00:14:05.104 "unmap": false, 00:14:05.104 "write_zeroes": true, 00:14:05.104 "flush": false, 00:14:05.104 "reset": true, 00:14:05.104 "compare": false, 00:14:05.104 "compare_and_write": false, 00:14:05.104 "abort": true, 00:14:05.104 "nvme_admin": false, 00:14:05.104 "nvme_io": false 00:14:05.104 }, 00:14:05.104 "driver_specific": {} 00:14:05.104 } 00:14:05.104 ] 00:14:05.104 22:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.104 22:22:04 -- common/autotest_common.sh@895 -- # return 0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@455 -- # qos_function_test 00:14:05.104 22:22:04 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.104 22:22:04 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:14:05.104 22:22:04 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:14:05.104 22:22:04 -- bdev/blockdev.sh@410 -- # local io_result=0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:05.104 22:22:04 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:05.104 22:22:04 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:05.104 22:22:04 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:05.104 22:22:04 -- bdev/blockdev.sh@376 -- # tail -1 00:14:05.104 Running I/O for 60 seconds... 
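The numbers in the next stretch of the trace fit a simple recipe: measure unthrottled IOPS with iostat.py, derive a limit, apply it with bdev_set_qos_limit, then re-measure and require the result to land within 10% of the limit on either side. Sketched below with this run's values; the divide-by-four step is inferred from 78134 becoming 19000 rather than printed verbatim, so treat that line as an assumption:

io_result=78134                                  # unthrottled IOPS (grep Malloc_0 | tail -1)
iops_limit=$(((io_result / 4) / 1000 * 1000))    # assumed derivation, yields 19000
lower_limit=$((iops_limit * 9 / 10))             # 17100, as in the trace below
upper_limit=$((iops_limit * 11 / 10))            # 20900, as in the trace below
qos_result=19011                                 # IOPS measured after the limit is applied
if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "qos limit not enforced within 10%" >&2
    exit 1
fi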
00:14:10.368 22:22:09 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 78134.05 312536.22 0.00 0.00 316416.00 0.00 0.00 ' 00:14:10.368 22:22:09 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:10.368 22:22:09 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:10.368 22:22:09 -- bdev/blockdev.sh@378 -- # iostat_result=78134.05 00:14:10.368 22:22:09 -- bdev/blockdev.sh@383 -- # echo 78134 00:14:10.368 22:22:09 -- bdev/blockdev.sh@414 -- # io_result=78134 00:14:10.368 22:22:09 -- bdev/blockdev.sh@416 -- # iops_limit=19000 00:14:10.368 22:22:09 -- bdev/blockdev.sh@417 -- # '[' 19000 -gt 1000 ']' 00:14:10.368 22:22:09 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:14:10.368 22:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.368 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:14:10.368 22:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.368 22:22:09 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:14:10.368 22:22:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:10.368 22:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.368 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:14:10.368 ************************************ 00:14:10.368 START TEST bdev_qos_iops 00:14:10.368 ************************************ 00:14:10.368 22:22:09 -- common/autotest_common.sh@1104 -- # run_qos_test 19000 IOPS Malloc_0 00:14:10.368 22:22:09 -- bdev/blockdev.sh@387 -- # local qos_limit=19000 00:14:10.368 22:22:09 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:10.368 22:22:09 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:14:10.368 22:22:09 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:10.368 22:22:09 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:10.368 22:22:09 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:10.368 22:22:09 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:10.368 22:22:09 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:10.368 22:22:09 -- bdev/blockdev.sh@376 -- # tail -1 00:14:15.636 22:22:14 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 19011.09 76044.35 0.00 0.00 77140.00 0.00 0.00 ' 00:14:15.636 22:22:14 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:15.636 22:22:14 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:15.636 22:22:14 -- bdev/blockdev.sh@378 -- # iostat_result=19011.09 00:14:15.636 22:22:14 -- bdev/blockdev.sh@383 -- # echo 19011 00:14:15.636 22:22:14 -- bdev/blockdev.sh@390 -- # qos_result=19011 00:14:15.636 22:22:14 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:15.636 22:22:14 -- bdev/blockdev.sh@394 -- # lower_limit=17100 00:14:15.636 22:22:14 -- bdev/blockdev.sh@395 -- # upper_limit=20900 00:14:15.636 22:22:14 -- bdev/blockdev.sh@398 -- # '[' 19011 -lt 17100 ']' 00:14:15.636 22:22:14 -- bdev/blockdev.sh@398 -- # '[' 19011 -gt 20900 ']' 00:14:15.636 00:14:15.636 real 0m5.200s 00:14:15.636 user 0m0.104s 00:14:15.636 sys 0m0.029s 00:14:15.636 22:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.636 22:22:14 -- common/autotest_common.sh@10 -- # set +x 00:14:15.636 ************************************ 00:14:15.636 END TEST bdev_qos_iops 00:14:15.636 ************************************ 00:14:15.636 22:22:14 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:15.636 22:22:14 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:15.636 22:22:14 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:15.636 22:22:14 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:15.636 22:22:14 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.636 22:22:14 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:15.636 22:22:14 -- bdev/blockdev.sh@376 -- # tail -1 00:14:20.927 22:22:19 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30823.92 123295.69 0.00 0.00 124928.00 0.00 0.00 ' 00:14:20.927 22:22:19 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:20.927 22:22:19 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:20.927 22:22:19 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:20.927 22:22:19 -- bdev/blockdev.sh@380 -- # iostat_result=124928.00 00:14:20.927 22:22:19 -- bdev/blockdev.sh@383 -- # echo 124928 00:14:20.927 22:22:19 -- bdev/blockdev.sh@425 -- # bw_limit=124928 00:14:20.927 22:22:19 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:14:20.927 22:22:19 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:14:20.927 22:22:19 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:20.927 22:22:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.927 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:14:20.927 22:22:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.927 22:22:19 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:20.927 22:22:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:20.927 22:22:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:20.927 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:14:20.927 ************************************ 00:14:20.927 START TEST bdev_qos_bw 00:14:20.927 ************************************ 00:14:20.927 22:22:19 -- common/autotest_common.sh@1104 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:20.927 22:22:19 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:14:20.927 22:22:19 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:20.927 22:22:19 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:20.927 22:22:19 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:20.927 22:22:19 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:20.927 22:22:19 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:20.927 22:22:19 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:20.927 22:22:19 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:20.927 22:22:19 -- bdev/blockdev.sh@376 -- # tail -1 00:14:26.190 22:22:24 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3073.66 12294.62 0.00 0.00 12544.00 0.00 0.00 ' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@380 -- # iostat_result=12544.00 00:14:26.190 22:22:24 -- bdev/blockdev.sh@383 -- # echo 12544 00:14:26.190 22:22:24 -- bdev/blockdev.sh@390 -- # qos_result=12544 00:14:26.190 22:22:24 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:14:26.190 22:22:24 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:14:26.190 22:22:24 -- bdev/blockdev.sh@395 -- # upper_limit=13516 00:14:26.190 22:22:24 -- bdev/blockdev.sh@398 -- # '[' 12544 -lt 11059 ']' 00:14:26.190 22:22:24 -- bdev/blockdev.sh@398 -- # '[' 
12544 -gt 13516 ']' 00:14:26.190 00:14:26.190 real 0m5.226s 00:14:26.190 user 0m0.097s 00:14:26.190 sys 0m0.035s 00:14:26.190 22:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.190 22:22:24 -- common/autotest_common.sh@10 -- # set +x 00:14:26.190 ************************************ 00:14:26.190 END TEST bdev_qos_bw 00:14:26.190 ************************************ 00:14:26.190 22:22:24 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:26.190 22:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.190 22:22:24 -- common/autotest_common.sh@10 -- # set +x 00:14:26.190 22:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.190 22:22:25 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:26.190 22:22:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:26.190 22:22:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.190 22:22:25 -- common/autotest_common.sh@10 -- # set +x 00:14:26.190 ************************************ 00:14:26.190 START TEST bdev_qos_ro_bw 00:14:26.190 ************************************ 00:14:26.190 22:22:25 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:26.190 22:22:25 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:26.190 22:22:25 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:26.190 22:22:25 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:26.190 22:22:25 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:26.190 22:22:25 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:26.190 22:22:25 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:26.190 22:22:25 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:26.190 22:22:25 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:26.190 22:22:25 -- bdev/blockdev.sh@376 -- # tail -1 00:14:31.459 22:22:30 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.82 2047.28 0.00 0.00 2060.00 0.00 0.00 ' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:14:31.459 22:22:30 -- bdev/blockdev.sh@383 -- # echo 2060 00:14:31.459 22:22:30 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:14:31.459 22:22:30 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:31.459 22:22:30 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:31.459 22:22:30 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:31.459 22:22:30 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:14:31.459 22:22:30 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:14:31.459 00:14:31.459 real 0m5.159s 00:14:31.459 user 0m0.110s 00:14:31.459 sys 0m0.029s 00:14:31.459 22:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.459 22:22:30 -- common/autotest_common.sh@10 -- # set +x 00:14:31.459 ************************************ 00:14:31.459 END TEST bdev_qos_ro_bw 00:14:31.459 ************************************ 00:14:31.459 22:22:30 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:31.459 22:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.459 22:22:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.027 22:22:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.027 22:22:30 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:32.027 22:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.027 22:22:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.027 00:14:32.027 Latency(us) 00:14:32.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.027 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:32.027 Malloc_0 : 26.61 26663.27 104.15 0.00 0.00 9511.67 1966.08 503316.48 00:14:32.027 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:32.027 Null_1 : 26.80 28288.34 110.50 0.00 0.00 9031.62 618.12 188743.68 00:14:32.027 =================================================================================================================== 00:14:32.027 Total : 54951.60 214.65 0.00 0.00 9263.70 618.12 503316.48 00:14:32.027 0 00:14:32.027 22:22:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.027 22:22:30 -- bdev/blockdev.sh@459 -- # killprocess 113331 00:14:32.027 22:22:30 -- common/autotest_common.sh@926 -- # '[' -z 113331 ']' 00:14:32.027 22:22:30 -- common/autotest_common.sh@930 -- # kill -0 113331 00:14:32.027 22:22:30 -- common/autotest_common.sh@931 -- # uname 00:14:32.027 22:22:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.027 22:22:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113331 00:14:32.027 22:22:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.027 killing process with pid 113331 00:14:32.027 22:22:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.027 22:22:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113331' 00:14:32.027 22:22:30 -- common/autotest_common.sh@945 -- # kill 113331 00:14:32.027 Received shutdown signal, test time was about 26.837704 seconds 00:14:32.027 00:14:32.027 Latency(us) 00:14:32.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.027 =================================================================================================================== 00:14:32.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.027 22:22:30 -- common/autotest_common.sh@950 -- # wait 113331 00:14:33.405 22:22:32 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:33.405 00:14:33.405 real 0m29.343s 00:14:33.405 user 0m30.064s 00:14:33.405 sys 0m0.597s 00:14:33.405 22:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.405 22:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:33.405 ************************************ 00:14:33.405 END TEST bdev_qos 00:14:33.405 ************************************ 00:14:33.405 22:22:32 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:33.405 22:22:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.405 22:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.405 22:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:33.405 ************************************ 00:14:33.405 START TEST bdev_qd_sampling 00:14:33.405 ************************************ 00:14:33.405 22:22:32 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:33.405 22:22:32 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:33.405 22:22:32 -- bdev/blockdev.sh@539 -- # QD_PID=113847 00:14:33.405 Process bdev QD sampling period testing pid: 113847 00:14:33.405 22:22:32 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 113847' 00:14:33.405 22:22:32 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:33.405 22:22:32 -- bdev/blockdev.sh@542 -- # waitforlisten 113847 00:14:33.405 22:22:32 -- common/autotest_common.sh@819 -- # '[' -z 113847 ']' 00:14:33.405 22:22:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.405 22:22:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.405 22:22:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.405 22:22:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.405 22:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:33.405 22:22:32 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:33.405 [2024-07-14 22:22:32.314440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:33.405 [2024-07-14 22:22:32.314807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113847 ] 00:14:33.664 [2024-07-14 22:22:32.494183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.923 [2024-07-14 22:22:32.735369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.923 [2024-07-14 22:22:32.735375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.493 22:22:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.493 22:22:33 -- common/autotest_common.sh@852 -- # return 0 00:14:34.493 22:22:33 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:34.493 22:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.493 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:14:34.493 Malloc_QD 00:14:34.493 22:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.493 22:22:33 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:34.493 22:22:33 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:34.493 22:22:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:34.493 22:22:33 -- common/autotest_common.sh@889 -- # local i 00:14:34.493 22:22:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:34.493 22:22:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:34.493 22:22:33 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:34.493 22:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.493 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:14:34.493 22:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.493 22:22:33 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:34.493 22:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.493 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:14:34.493 [ 00:14:34.493 { 00:14:34.493 "name": "Malloc_QD", 00:14:34.493 "aliases": [ 00:14:34.493 "4f145b8c-fc17-4d31-b190-8d8610c42c21" 00:14:34.493 ], 00:14:34.493 "product_name": "Malloc disk", 00:14:34.493 "block_size": 512, 00:14:34.493 "num_blocks": 262144, 
00:14:34.493 "uuid": "4f145b8c-fc17-4d31-b190-8d8610c42c21", 00:14:34.493 "assigned_rate_limits": { 00:14:34.493 "rw_ios_per_sec": 0, 00:14:34.493 "rw_mbytes_per_sec": 0, 00:14:34.493 "r_mbytes_per_sec": 0, 00:14:34.493 "w_mbytes_per_sec": 0 00:14:34.493 }, 00:14:34.493 "claimed": false, 00:14:34.493 "zoned": false, 00:14:34.493 "supported_io_types": { 00:14:34.493 "read": true, 00:14:34.493 "write": true, 00:14:34.493 "unmap": true, 00:14:34.493 "write_zeroes": true, 00:14:34.493 "flush": true, 00:14:34.493 "reset": true, 00:14:34.493 "compare": false, 00:14:34.493 "compare_and_write": false, 00:14:34.493 "abort": true, 00:14:34.493 "nvme_admin": false, 00:14:34.494 "nvme_io": false 00:14:34.494 }, 00:14:34.494 "memory_domains": [ 00:14:34.494 { 00:14:34.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.494 "dma_device_type": 2 00:14:34.494 } 00:14:34.494 ], 00:14:34.494 "driver_specific": {} 00:14:34.494 } 00:14:34.494 ] 00:14:34.494 22:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.494 22:22:33 -- common/autotest_common.sh@895 -- # return 0 00:14:34.494 22:22:33 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:34.494 22:22:33 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.494 Running I/O for 5 seconds... 00:14:36.436 22:22:35 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:36.436 22:22:35 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:36.436 22:22:35 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:36.436 22:22:35 -- bdev/blockdev.sh@519 -- # local iostats 00:14:36.436 22:22:35 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:36.436 22:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.436 22:22:35 -- common/autotest_common.sh@10 -- # set +x 00:14:36.436 22:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.436 22:22:35 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:36.436 22:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.436 22:22:35 -- common/autotest_common.sh@10 -- # set +x 00:14:36.436 22:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.436 22:22:35 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:36.436 "tick_rate": 2200000000, 00:14:36.436 "ticks": 1719140864826, 00:14:36.436 "bdevs": [ 00:14:36.436 { 00:14:36.436 "name": "Malloc_QD", 00:14:36.436 "bytes_read": 941658624, 00:14:36.436 "num_read_ops": 229891, 00:14:36.436 "bytes_written": 0, 00:14:36.436 "num_write_ops": 0, 00:14:36.436 "bytes_unmapped": 0, 00:14:36.436 "num_unmap_ops": 0, 00:14:36.436 "bytes_copied": 0, 00:14:36.436 "num_copy_ops": 0, 00:14:36.436 "read_latency_ticks": 2149144542000, 00:14:36.436 "max_read_latency_ticks": 13246974, 00:14:36.436 "min_read_latency_ticks": 319286, 00:14:36.436 "write_latency_ticks": 0, 00:14:36.436 "max_write_latency_ticks": 0, 00:14:36.436 "min_write_latency_ticks": 0, 00:14:36.436 "unmap_latency_ticks": 0, 00:14:36.436 "max_unmap_latency_ticks": 0, 00:14:36.436 "min_unmap_latency_ticks": 0, 00:14:36.436 "copy_latency_ticks": 0, 00:14:36.436 "max_copy_latency_ticks": 0, 00:14:36.436 "min_copy_latency_ticks": 0, 00:14:36.436 "io_error": {}, 00:14:36.436 "queue_depth_polling_period": 10, 00:14:36.436 "queue_depth": 512, 00:14:36.436 "io_time": 20, 00:14:36.436 "weighted_io_time": 10240 00:14:36.436 } 00:14:36.436 ] 00:14:36.436 }' 00:14:36.436 22:22:35 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:14:36.695 22:22:35 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:36.695 22:22:35 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:36.695 22:22:35 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:36.695 22:22:35 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:36.695 22:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.695 22:22:35 -- common/autotest_common.sh@10 -- # set +x 00:14:36.695 00:14:36.695 Latency(us) 00:14:36.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.695 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:36.695 Malloc_QD : 1.99 59746.32 233.38 0.00 0.00 4274.46 1496.90 6345.08 00:14:36.695 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:36.695 Malloc_QD : 1.99 60260.69 235.39 0.00 0.00 4237.78 1437.32 5183.30 00:14:36.695 =================================================================================================================== 00:14:36.695 Total : 120007.01 468.78 0.00 0.00 4256.04 1437.32 6345.08 00:14:36.695 0 00:14:36.695 22:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.695 22:22:35 -- bdev/blockdev.sh@552 -- # killprocess 113847 00:14:36.695 22:22:35 -- common/autotest_common.sh@926 -- # '[' -z 113847 ']' 00:14:36.695 22:22:35 -- common/autotest_common.sh@930 -- # kill -0 113847 00:14:36.695 22:22:35 -- common/autotest_common.sh@931 -- # uname 00:14:36.695 22:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:36.695 22:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113847 00:14:36.695 22:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:36.695 22:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:36.695 killing process with pid 113847 00:14:36.695 Received shutdown signal, test time was about 2.123343 seconds 00:14:36.695 00:14:36.695 Latency(us) 00:14:36.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.695 =================================================================================================================== 00:14:36.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.695 22:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113847' 00:14:36.695 22:22:35 -- common/autotest_common.sh@945 -- # kill 113847 00:14:36.695 22:22:35 -- common/autotest_common.sh@950 -- # wait 113847 00:14:38.071 22:22:36 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:38.071 00:14:38.071 real 0m4.660s 00:14:38.071 user 0m8.573s 00:14:38.071 sys 0m0.444s 00:14:38.071 22:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.071 ************************************ 00:14:38.071 END TEST bdev_qd_sampling 00:14:38.071 ************************************ 00:14:38.071 22:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.071 22:22:36 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:38.072 22:22:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:38.072 22:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.072 22:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.072 ************************************ 00:14:38.072 START TEST bdev_error 00:14:38.072 ************************************ 00:14:38.072 22:22:36 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:14:38.072 22:22:36 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:38.072 
22:22:36 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:38.072 22:22:36 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:38.072 22:22:36 -- bdev/blockdev.sh@470 -- # ERR_PID=113941 00:14:38.072 22:22:36 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 113941' 00:14:38.072 Process error testing pid: 113941 00:14:38.072 22:22:36 -- bdev/blockdev.sh@472 -- # waitforlisten 113941 00:14:38.072 22:22:36 -- common/autotest_common.sh@819 -- # '[' -z 113941 ']' 00:14:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.072 22:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.072 22:22:36 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:38.072 22:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.072 22:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.072 22:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.072 22:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.072 [2024-07-14 22:22:37.023997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:38.072 [2024-07-14 22:22:37.024224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113941 ] 00:14:38.330 [2024-07-14 22:22:37.190498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.330 [2024-07-14 22:22:37.372808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.898 22:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.898 22:22:37 -- common/autotest_common.sh@852 -- # return 0 00:14:38.898 22:22:37 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:38.898 22:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.898 22:22:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.158 Dev_1 00:14:39.158 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.158 22:22:38 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:39.158 22:22:38 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:39.158 22:22:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.158 22:22:38 -- common/autotest_common.sh@889 -- # local i 00:14:39.158 22:22:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.158 22:22:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.158 22:22:38 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.158 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.158 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.158 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.158 22:22:38 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:39.158 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.158 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.158 [ 00:14:39.158 { 00:14:39.158 "name": "Dev_1", 00:14:39.158 "aliases": [ 00:14:39.158 "4774e4ed-951e-4eeb-a742-ac73f41bb45f" 00:14:39.158 ], 00:14:39.158 "product_name": "Malloc disk", 00:14:39.158 "block_size": 512, 00:14:39.158 "num_blocks": 262144, 
00:14:39.158 "uuid": "4774e4ed-951e-4eeb-a742-ac73f41bb45f", 00:14:39.158 "assigned_rate_limits": { 00:14:39.158 "rw_ios_per_sec": 0, 00:14:39.158 "rw_mbytes_per_sec": 0, 00:14:39.158 "r_mbytes_per_sec": 0, 00:14:39.158 "w_mbytes_per_sec": 0 00:14:39.158 }, 00:14:39.158 "claimed": false, 00:14:39.158 "zoned": false, 00:14:39.158 "supported_io_types": { 00:14:39.158 "read": true, 00:14:39.158 "write": true, 00:14:39.158 "unmap": true, 00:14:39.158 "write_zeroes": true, 00:14:39.158 "flush": true, 00:14:39.158 "reset": true, 00:14:39.158 "compare": false, 00:14:39.158 "compare_and_write": false, 00:14:39.158 "abort": true, 00:14:39.158 "nvme_admin": false, 00:14:39.158 "nvme_io": false 00:14:39.158 }, 00:14:39.158 "memory_domains": [ 00:14:39.158 { 00:14:39.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.158 "dma_device_type": 2 00:14:39.158 } 00:14:39.158 ], 00:14:39.158 "driver_specific": {} 00:14:39.158 } 00:14:39.158 ] 00:14:39.158 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.158 22:22:38 -- common/autotest_common.sh@895 -- # return 0 00:14:39.158 22:22:38 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:39.158 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.158 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.158 true 00:14:39.158 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.158 22:22:38 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:39.158 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.158 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.416 Dev_2 00:14:39.416 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.416 22:22:38 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:39.416 22:22:38 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:39.416 22:22:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.416 22:22:38 -- common/autotest_common.sh@889 -- # local i 00:14:39.416 22:22:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.417 22:22:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.417 22:22:38 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.417 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.417 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.417 22:22:38 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:39.417 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.417 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 [ 00:14:39.417 { 00:14:39.417 "name": "Dev_2", 00:14:39.417 "aliases": [ 00:14:39.417 "5ba2bc39-d7cf-418c-a776-93398d7b9769" 00:14:39.417 ], 00:14:39.417 "product_name": "Malloc disk", 00:14:39.417 "block_size": 512, 00:14:39.417 "num_blocks": 262144, 00:14:39.417 "uuid": "5ba2bc39-d7cf-418c-a776-93398d7b9769", 00:14:39.417 "assigned_rate_limits": { 00:14:39.417 "rw_ios_per_sec": 0, 00:14:39.417 "rw_mbytes_per_sec": 0, 00:14:39.417 "r_mbytes_per_sec": 0, 00:14:39.417 "w_mbytes_per_sec": 0 00:14:39.417 }, 00:14:39.417 "claimed": false, 00:14:39.417 "zoned": false, 00:14:39.417 "supported_io_types": { 00:14:39.417 "read": true, 00:14:39.417 "write": true, 00:14:39.417 "unmap": true, 00:14:39.417 "write_zeroes": true, 00:14:39.417 "flush": true, 00:14:39.417 "reset": true, 00:14:39.417 "compare": false, 
00:14:39.417 "compare_and_write": false, 00:14:39.417 "abort": true, 00:14:39.417 "nvme_admin": false, 00:14:39.417 "nvme_io": false 00:14:39.417 }, 00:14:39.417 "memory_domains": [ 00:14:39.417 { 00:14:39.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.417 "dma_device_type": 2 00:14:39.417 } 00:14:39.417 ], 00:14:39.417 "driver_specific": {} 00:14:39.417 } 00:14:39.417 ] 00:14:39.417 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.417 22:22:38 -- common/autotest_common.sh@895 -- # return 0 00:14:39.417 22:22:38 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:39.417 22:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.417 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 22:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.417 22:22:38 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:39.417 22:22:38 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:39.417 Running I/O for 5 seconds... 00:14:40.352 Process is existed as continue on error is set. Pid: 113941 00:14:40.352 22:22:39 -- bdev/blockdev.sh@485 -- # kill -0 113941 00:14:40.352 22:22:39 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 113941' 00:14:40.352 22:22:39 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:40.352 22:22:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.352 22:22:39 -- common/autotest_common.sh@10 -- # set +x 00:14:40.352 22:22:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.352 22:22:39 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:40.352 22:22:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.352 22:22:39 -- common/autotest_common.sh@10 -- # set +x 00:14:40.352 Timeout while waiting for response: 00:14:40.352 00:14:40.352 00:14:40.610 22:22:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.610 22:22:39 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:44.793 00:14:44.793 Latency(us) 00:14:44.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.794 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:44.794 EE_Dev_1 : 0.91 45356.61 177.17 5.52 0.00 350.26 179.67 748.45 00:14:44.794 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:44.794 Dev_2 : 5.00 92743.16 362.28 0.00 0.00 170.05 54.46 282162.27 00:14:44.794 =================================================================================================================== 00:14:44.794 Total : 138099.77 539.45 5.52 0.00 184.71 54.46 282162.27 00:14:45.726 22:22:44 -- bdev/blockdev.sh@497 -- # killprocess 113941 00:14:45.726 22:22:44 -- common/autotest_common.sh@926 -- # '[' -z 113941 ']' 00:14:45.726 22:22:44 -- common/autotest_common.sh@930 -- # kill -0 113941 00:14:45.726 22:22:44 -- common/autotest_common.sh@931 -- # uname 00:14:45.726 22:22:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:45.726 22:22:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113941 00:14:45.726 22:22:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:45.726 22:22:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:45.726 killing process with pid 113941 00:14:45.726 Received shutdown signal, test time was about 5.000000 seconds 00:14:45.726 00:14:45.726 Latency(us) 00:14:45.726 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:45.726 =================================================================================================================== 00:14:45.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.726 22:22:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113941' 00:14:45.726 22:22:44 -- common/autotest_common.sh@945 -- # kill 113941 00:14:45.726 22:22:44 -- common/autotest_common.sh@950 -- # wait 113941 00:14:47.103 22:22:45 -- bdev/blockdev.sh@501 -- # ERR_PID=114081 00:14:47.103 22:22:45 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 114081' 00:14:47.103 Process error testing pid: 114081 00:14:47.103 22:22:45 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:47.103 22:22:45 -- bdev/blockdev.sh@503 -- # waitforlisten 114081 00:14:47.103 22:22:45 -- common/autotest_common.sh@819 -- # '[' -z 114081 ']' 00:14:47.103 22:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.103 22:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.103 22:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.103 22:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:47.103 22:22:45 -- common/autotest_common.sh@10 -- # set +x 00:14:47.103 [2024-07-14 22:22:45.990392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:47.103 [2024-07-14 22:22:45.991367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114081 ] 00:14:47.103 [2024-07-14 22:22:46.150264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.362 [2024-07-14 22:22:46.329922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.930 22:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.930 22:22:46 -- common/autotest_common.sh@852 -- # return 0 00:14:47.930 22:22:46 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:47.930 22:22:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.930 22:22:46 -- common/autotest_common.sh@10 -- # set +x 00:14:48.188 Dev_1 00:14:48.188 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.188 22:22:47 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:48.188 22:22:47 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:48.188 22:22:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.189 22:22:47 -- common/autotest_common.sh@889 -- # local i 00:14:48.189 22:22:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.189 22:22:47 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 [ 00:14:48.189 { 00:14:48.189 "name": "Dev_1", 00:14:48.189 "aliases": [ 00:14:48.189 "95f0d7b8-b9b1-40ac-b926-6c4df053126a" 00:14:48.189 ], 00:14:48.189 "product_name": "Malloc disk", 00:14:48.189 "block_size": 512, 00:14:48.189 "num_blocks": 262144, 00:14:48.189 "uuid": "95f0d7b8-b9b1-40ac-b926-6c4df053126a", 00:14:48.189 "assigned_rate_limits": { 00:14:48.189 "rw_ios_per_sec": 0, 00:14:48.189 "rw_mbytes_per_sec": 0, 00:14:48.189 "r_mbytes_per_sec": 0, 00:14:48.189 "w_mbytes_per_sec": 0 00:14:48.189 }, 00:14:48.189 "claimed": false, 00:14:48.189 "zoned": false, 00:14:48.189 "supported_io_types": { 00:14:48.189 "read": true, 00:14:48.189 "write": true, 00:14:48.189 "unmap": true, 00:14:48.189 "write_zeroes": true, 00:14:48.189 "flush": true, 00:14:48.189 "reset": true, 00:14:48.189 "compare": false, 00:14:48.189 "compare_and_write": false, 00:14:48.189 "abort": true, 00:14:48.189 "nvme_admin": false, 00:14:48.189 "nvme_io": false 00:14:48.189 }, 00:14:48.189 "memory_domains": [ 00:14:48.189 { 00:14:48.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.189 "dma_device_type": 2 00:14:48.189 } 00:14:48.189 ], 00:14:48.189 "driver_specific": {} 00:14:48.189 } 00:14:48.189 ] 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@895 -- # return 0 00:14:48.189 22:22:47 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 true 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 Dev_2 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:48.189 22:22:47 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:48.189 22:22:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.189 22:22:47 -- common/autotest_common.sh@889 -- # local i 00:14:48.189 22:22:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.189 22:22:47 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 [ 00:14:48.189 { 00:14:48.189 "name": "Dev_2", 00:14:48.189 "aliases": [ 00:14:48.189 "d360153e-5df1-4b76-b8d3-cdc99e925a9d" 00:14:48.189 ], 00:14:48.189 "product_name": "Malloc disk", 00:14:48.189 "block_size": 512, 00:14:48.189 "num_blocks": 262144, 00:14:48.189 "uuid": "d360153e-5df1-4b76-b8d3-cdc99e925a9d", 00:14:48.189 "assigned_rate_limits": { 00:14:48.189 "rw_ios_per_sec": 0, 00:14:48.189 "rw_mbytes_per_sec": 0, 00:14:48.189 "r_mbytes_per_sec": 0, 00:14:48.189 
"w_mbytes_per_sec": 0 00:14:48.189 }, 00:14:48.189 "claimed": false, 00:14:48.189 "zoned": false, 00:14:48.189 "supported_io_types": { 00:14:48.189 "read": true, 00:14:48.189 "write": true, 00:14:48.189 "unmap": true, 00:14:48.189 "write_zeroes": true, 00:14:48.189 "flush": true, 00:14:48.189 "reset": true, 00:14:48.189 "compare": false, 00:14:48.189 "compare_and_write": false, 00:14:48.189 "abort": true, 00:14:48.189 "nvme_admin": false, 00:14:48.189 "nvme_io": false 00:14:48.189 }, 00:14:48.189 "memory_domains": [ 00:14:48.189 { 00:14:48.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.189 "dma_device_type": 2 00:14:48.189 } 00:14:48.189 ], 00:14:48.189 "driver_specific": {} 00:14:48.189 } 00:14:48.189 ] 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- common/autotest_common.sh@895 -- # return 0 00:14:48.189 22:22:47 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:48.189 22:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.189 22:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:48.189 22:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.189 22:22:47 -- bdev/blockdev.sh@513 -- # NOT wait 114081 00:14:48.189 22:22:47 -- common/autotest_common.sh@640 -- # local es=0 00:14:48.189 22:22:47 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 114081 00:14:48.189 22:22:47 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:48.189 22:22:47 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:48.189 22:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.189 22:22:47 -- common/autotest_common.sh@632 -- # type -t wait 00:14:48.189 22:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.189 22:22:47 -- common/autotest_common.sh@643 -- # wait 114081 00:14:48.448 Running I/O for 5 seconds... 
00:14:48.448 task offset: 108584 on job bdev=EE_Dev_1 fails 00:14:48.448 00:14:48.448 Latency(us) 00:14:48.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.448 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:48.448 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:48.448 EE_Dev_1 : 0.00 30261.35 118.21 6877.58 0.00 363.28 121.95 647.91 00:14:48.448 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:48.448 Dev_2 : 0.00 21902.81 85.56 0.00 0.00 507.38 116.83 934.63 00:14:48.448 =================================================================================================================== 00:14:48.448 Total : 52164.15 203.77 6877.58 0.00 441.44 116.83 934.63 00:14:48.448 [2024-07-14 22:22:47.343441] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:48.448 request: 00:14:48.448 { 00:14:48.448 "method": "perform_tests", 00:14:48.448 "req_id": 1 00:14:48.448 } 00:14:48.448 Got JSON-RPC error response 00:14:48.448 response: 00:14:48.448 { 00:14:48.448 "code": -32603, 00:14:48.448 "message": "bdevperf failed with error Operation not permitted" 00:14:48.448 } 00:14:50.360 22:22:48 -- common/autotest_common.sh@643 -- # es=255 00:14:50.360 22:22:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:50.360 22:22:48 -- common/autotest_common.sh@652 -- # es=127 00:14:50.360 22:22:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:50.360 22:22:48 -- common/autotest_common.sh@660 -- # es=1 00:14:50.360 22:22:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:50.360 00:14:50.360 real 0m12.010s 00:14:50.360 user 0m12.114s 00:14:50.360 sys 0m0.899s 00:14:50.360 22:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.360 ************************************ 00:14:50.360 22:22:48 -- common/autotest_common.sh@10 -- # set +x 00:14:50.360 END TEST bdev_error 00:14:50.360 ************************************ 00:14:50.360 22:22:49 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:50.360 22:22:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:50.360 22:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:50.360 22:22:49 -- common/autotest_common.sh@10 -- # set +x 00:14:50.360 ************************************ 00:14:50.360 START TEST bdev_stat 00:14:50.360 ************************************ 00:14:50.360 22:22:49 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:50.360 22:22:49 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:50.360 22:22:49 -- bdev/blockdev.sh@594 -- # STAT_PID=114145 00:14:50.360 Process Bdev IO statistics testing pid: 114145 00:14:50.360 22:22:49 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 114145' 00:14:50.360 22:22:49 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:50.360 22:22:49 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:50.360 22:22:49 -- bdev/blockdev.sh@597 -- # waitforlisten 114145 00:14:50.360 22:22:49 -- common/autotest_common.sh@819 -- # '[' -z 114145 ']' 00:14:50.360 22:22:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
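The es bookkeeping that follows is the tail of that NOT assertion. A hedged reconstruction from the xtrace, simplified (the real helper in autotest_common.sh routes the command through valid_exec_arg first and may differ in other details):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # signal-style exits collapse to 127 (255 -> 127, as logged)
    case "$es" in
        0) ;;                  # wrapped command succeeded: es stays 0
        *) es=1 ;;             # any failure normalizes to 1
    esac
    (( !es == 0 ))             # exits 0 exactly when the wrapped command failed
}

# usage mirroring the trace: the assertion passes only because bdevperf exits non-zero
NOT wait 114081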
00:14:50.360 22:22:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:50.360 22:22:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.360 22:22:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:50.360 22:22:49 -- common/autotest_common.sh@10 -- # set +x 00:14:50.360 [2024-07-14 22:22:49.086202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:50.360 [2024-07-14 22:22:49.086381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114145 ] 00:14:50.360 [2024-07-14 22:22:49.250082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.618 [2024-07-14 22:22:49.487128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.619 [2024-07-14 22:22:49.487137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.185 22:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:51.185 22:22:50 -- common/autotest_common.sh@852 -- # return 0 00:14:51.185 22:22:50 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:51.185 22:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.185 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:14:51.185 Malloc_STAT 00:14:51.186 22:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.186 22:22:50 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:51.186 22:22:50 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:51.186 22:22:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:51.186 22:22:50 -- common/autotest_common.sh@889 -- # local i 00:14:51.186 22:22:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:51.186 22:22:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:51.186 22:22:50 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:51.186 22:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.186 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:14:51.186 22:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.186 22:22:50 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:51.186 22:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.186 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:14:51.186 [ 00:14:51.186 { 00:14:51.186 "name": "Malloc_STAT", 00:14:51.186 "aliases": [ 00:14:51.186 "7aa09db2-9ebe-4cf3-aaa9-5e679ee9506e" 00:14:51.186 ], 00:14:51.186 "product_name": "Malloc disk", 00:14:51.186 "block_size": 512, 00:14:51.186 "num_blocks": 262144, 00:14:51.186 "uuid": "7aa09db2-9ebe-4cf3-aaa9-5e679ee9506e", 00:14:51.186 "assigned_rate_limits": { 00:14:51.186 "rw_ios_per_sec": 0, 00:14:51.186 "rw_mbytes_per_sec": 0, 00:14:51.186 "r_mbytes_per_sec": 0, 00:14:51.186 "w_mbytes_per_sec": 0 00:14:51.186 }, 00:14:51.186 "claimed": false, 00:14:51.186 "zoned": false, 00:14:51.186 "supported_io_types": { 00:14:51.186 "read": true, 00:14:51.186 "write": true, 00:14:51.186 "unmap": true, 00:14:51.186 "write_zeroes": true, 00:14:51.186 "flush": true, 00:14:51.186 "reset": true, 00:14:51.186 "compare": false, 00:14:51.186 "compare_and_write": false, 00:14:51.186 "abort": true, 00:14:51.186 "nvme_admin": false, 00:14:51.186 "nvme_io": 
false 00:14:51.186 }, 00:14:51.186 "memory_domains": [ 00:14:51.186 { 00:14:51.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.186 "dma_device_type": 2 00:14:51.186 } 00:14:51.186 ], 00:14:51.186 "driver_specific": {} 00:14:51.186 } 00:14:51.186 ] 00:14:51.186 22:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.186 22:22:50 -- common/autotest_common.sh@895 -- # return 0 00:14:51.186 22:22:50 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:51.186 22:22:50 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:51.445 Running I/O for 10 seconds... 00:14:53.351 22:22:52 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:53.351 22:22:52 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:53.351 22:22:52 -- bdev/blockdev.sh@558 -- # local iostats 00:14:53.351 22:22:52 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:53.351 22:22:52 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:53.351 22:22:52 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:53.351 22:22:52 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:53.351 22:22:52 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:53.351 22:22:52 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:53.351 22:22:52 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:53.351 22:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.351 22:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:53.351 22:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.351 22:22:52 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:53.351 "tick_rate": 2200000000, 00:14:53.351 "ticks": 1756046770398, 00:14:53.351 "bdevs": [ 00:14:53.351 { 00:14:53.351 "name": "Malloc_STAT", 00:14:53.351 "bytes_read": 960532992, 00:14:53.351 "num_read_ops": 234499, 00:14:53.351 "bytes_written": 0, 00:14:53.351 "num_write_ops": 0, 00:14:53.351 "bytes_unmapped": 0, 00:14:53.351 "num_unmap_ops": 0, 00:14:53.351 "bytes_copied": 0, 00:14:53.351 "num_copy_ops": 0, 00:14:53.351 "read_latency_ticks": 2169134144810, 00:14:53.351 "max_read_latency_ticks": 22953872, 00:14:53.351 "min_read_latency_ticks": 541012, 00:14:53.351 "write_latency_ticks": 0, 00:14:53.351 "max_write_latency_ticks": 0, 00:14:53.351 "min_write_latency_ticks": 0, 00:14:53.351 "unmap_latency_ticks": 0, 00:14:53.351 "max_unmap_latency_ticks": 0, 00:14:53.351 "min_unmap_latency_ticks": 0, 00:14:53.351 "copy_latency_ticks": 0, 00:14:53.351 "max_copy_latency_ticks": 0, 00:14:53.351 "min_copy_latency_ticks": 0, 00:14:53.351 "io_error": {} 00:14:53.351 } 00:14:53.351 ] 00:14:53.351 }' 00:14:53.351 22:22:52 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:53.351 22:22:52 -- bdev/blockdev.sh@567 -- # io_count1=234499 00:14:53.351 22:22:52 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:53.351 22:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.351 22:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:53.351 22:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.351 22:22:52 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:53.351 "tick_rate": 2200000000, 00:14:53.351 "ticks": 1756206208606, 00:14:53.351 "name": "Malloc_STAT", 00:14:53.351 "channels": [ 00:14:53.351 { 00:14:53.351 "thread_id": 2, 00:14:53.351 "bytes_read": 493879296, 00:14:53.351 "num_read_ops": 120576, 00:14:53.351 "bytes_written": 0, 00:14:53.351 "num_write_ops": 0, 00:14:53.351 
"bytes_unmapped": 0, 00:14:53.351 "num_unmap_ops": 0, 00:14:53.351 "bytes_copied": 0, 00:14:53.351 "num_copy_ops": 0, 00:14:53.351 "read_latency_ticks": 1124845170911, 00:14:53.351 "max_read_latency_ticks": 22953872, 00:14:53.351 "min_read_latency_ticks": 7085950, 00:14:53.351 "write_latency_ticks": 0, 00:14:53.351 "max_write_latency_ticks": 0, 00:14:53.351 "min_write_latency_ticks": 0, 00:14:53.351 "unmap_latency_ticks": 0, 00:14:53.351 "max_unmap_latency_ticks": 0, 00:14:53.351 "min_unmap_latency_ticks": 0, 00:14:53.351 "copy_latency_ticks": 0, 00:14:53.351 "max_copy_latency_ticks": 0, 00:14:53.351 "min_copy_latency_ticks": 0 00:14:53.351 }, 00:14:53.351 { 00:14:53.351 "thread_id": 3, 00:14:53.351 "bytes_read": 501219328, 00:14:53.351 "num_read_ops": 122368, 00:14:53.351 "bytes_written": 0, 00:14:53.351 "num_write_ops": 0, 00:14:53.351 "bytes_unmapped": 0, 00:14:53.351 "num_unmap_ops": 0, 00:14:53.351 "bytes_copied": 0, 00:14:53.351 "num_copy_ops": 0, 00:14:53.351 "read_latency_ticks": 1125891957160, 00:14:53.351 "max_read_latency_ticks": 11532710, 00:14:53.351 "min_read_latency_ticks": 5138676, 00:14:53.351 "write_latency_ticks": 0, 00:14:53.351 "max_write_latency_ticks": 0, 00:14:53.351 "min_write_latency_ticks": 0, 00:14:53.351 "unmap_latency_ticks": 0, 00:14:53.351 "max_unmap_latency_ticks": 0, 00:14:53.351 "min_unmap_latency_ticks": 0, 00:14:53.351 "copy_latency_ticks": 0, 00:14:53.351 "max_copy_latency_ticks": 0, 00:14:53.351 "min_copy_latency_ticks": 0 00:14:53.351 } 00:14:53.351 ] 00:14:53.351 }' 00:14:53.351 22:22:52 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:53.351 22:22:52 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=120576 00:14:53.351 22:22:52 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=120576 00:14:53.352 22:22:52 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:53.352 22:22:52 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=122368 00:14:53.352 22:22:52 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=242944 00:14:53.352 22:22:52 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:53.352 22:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.352 22:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:53.609 22:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.609 22:22:52 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:53.609 "tick_rate": 2200000000, 00:14:53.609 "ticks": 1756501705974, 00:14:53.609 "bdevs": [ 00:14:53.610 { 00:14:53.610 "name": "Malloc_STAT", 00:14:53.610 "bytes_read": 1060147712, 00:14:53.610 "num_read_ops": 258819, 00:14:53.610 "bytes_written": 0, 00:14:53.610 "num_write_ops": 0, 00:14:53.610 "bytes_unmapped": 0, 00:14:53.610 "num_unmap_ops": 0, 00:14:53.610 "bytes_copied": 0, 00:14:53.610 "num_copy_ops": 0, 00:14:53.610 "read_latency_ticks": 2402665964527, 00:14:53.610 "max_read_latency_ticks": 22953872, 00:14:53.610 "min_read_latency_ticks": 541012, 00:14:53.610 "write_latency_ticks": 0, 00:14:53.610 "max_write_latency_ticks": 0, 00:14:53.610 "min_write_latency_ticks": 0, 00:14:53.610 "unmap_latency_ticks": 0, 00:14:53.610 "max_unmap_latency_ticks": 0, 00:14:53.610 "min_unmap_latency_ticks": 0, 00:14:53.610 "copy_latency_ticks": 0, 00:14:53.610 "max_copy_latency_ticks": 0, 00:14:53.610 "min_copy_latency_ticks": 0, 00:14:53.610 "io_error": {} 00:14:53.610 } 00:14:53.610 ] 00:14:53.610 }' 00:14:53.610 22:22:52 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:53.610 22:22:52 -- bdev/blockdev.sh@576 -- # 
io_count2=258819 00:14:53.610 22:22:52 -- bdev/blockdev.sh@581 -- # '[' 242944 -lt 234499 ']' 00:14:53.610 22:22:52 -- bdev/blockdev.sh@581 -- # '[' 242944 -gt 258819 ']' 00:14:53.610 22:22:52 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:53.610 22:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.610 22:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:53.610 00:14:53.610 Latency(us) 00:14:53.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.610 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:53.610 Malloc_STAT : 2.21 59958.66 234.21 0.00 0.00 4259.62 1422.43 10485.76 00:14:53.610 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:53.610 Malloc_STAT : 2.21 60998.16 238.27 0.00 0.00 4187.39 1333.06 5242.88 00:14:53.610 =================================================================================================================== 00:14:53.610 Total : 120956.82 472.49 0.00 0.00 4223.19 1333.06 10485.76 00:14:53.610 0 00:14:53.610 22:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.610 22:22:52 -- bdev/blockdev.sh@607 -- # killprocess 114145 00:14:53.610 22:22:52 -- common/autotest_common.sh@926 -- # '[' -z 114145 ']' 00:14:53.610 22:22:52 -- common/autotest_common.sh@930 -- # kill -0 114145 00:14:53.610 22:22:52 -- common/autotest_common.sh@931 -- # uname 00:14:53.610 22:22:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.610 22:22:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114145 00:14:53.610 22:22:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.610 22:22:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.610 22:22:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114145' 00:14:53.610 killing process with pid 114145 00:14:53.610 Received shutdown signal, test time was about 2.342838 seconds 00:14:53.610 00:14:53.610 Latency(us) 00:14:53.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.610 =================================================================================================================== 00:14:53.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.610 22:22:52 -- common/autotest_common.sh@945 -- # kill 114145 00:14:53.610 22:22:52 -- common/autotest_common.sh@950 -- # wait 114145 00:14:54.983 22:22:53 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:54.983 00:14:54.983 real 0m4.846s 00:14:54.983 user 0m9.092s 00:14:54.983 sys 0m0.446s 00:14:54.983 22:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.983 ************************************ 00:14:54.983 END TEST bdev_stat 00:14:54.983 ************************************ 00:14:54.983 22:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:54.983 22:22:53 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:54.983 22:22:53 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:54.983 22:22:53 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:54.983 22:22:53 -- bdev/blockdev.sh@809 -- # cleanup 00:14:54.983 22:22:53 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:54.983 22:22:53 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:54.983 22:22:53 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:54.983 22:22:53 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:54.983 
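The bdev_stat flow above takes a whole-device iostat snapshot, a per-channel snapshot, and a second whole-device snapshot, then requires the per-channel sum to land between the two device-level counts. A minimal sketch of the same check, assuming the app is still serving /var/tmp/spdk.sock and the Malloc_STAT bdev exists (names and RPCs are taken from the trace; only the shell plumbing is new):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# First whole-device snapshot.
io_count1=$(${rpc} bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# Per-channel snapshot: one entry per reactor thread driving the bdev.
iostats=$(${rpc} bdev_get_iostat -b Malloc_STAT -c)
ch1=$(jq -r '.channels[0].num_read_ops' <<< "${iostats}")
ch2=$(jq -r '.channels[1].num_read_ops' <<< "${iostats}")
# Second whole-device snapshot.
io_count2=$(${rpc} bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# Above, 242944 had to sit between 234499 and 258819; generically:
total=$((ch1 + ch2))
[ "${total}" -ge "${io_count1}" ] && [ "${total}" -le "${io_count2}" ] || exit 1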
22:22:53 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:54.983 22:22:53 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:14:54.983 ************************************ 00:14:54.983 END TEST blockdev_general 00:14:54.983 ************************************ 00:14:54.983 00:14:54.983 real 2m22.347s 00:14:54.983 user 5m52.440s 00:14:54.983 sys 0m20.924s 00:14:54.983 22:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.983 22:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:54.983 22:22:53 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:54.983 22:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:54.983 22:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.983 22:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:54.983 ************************************ 00:14:54.983 START TEST bdev_raid 00:14:54.983 ************************************ 00:14:54.983 22:22:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:54.983 * Looking for test storage... 00:14:55.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:55.241 22:22:54 -- bdev/nbd_common.sh@6 -- # set -e 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:55.241 22:22:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.241 22:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.241 22:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:55.241 ************************************ 00:14:55.241 START TEST raid_function_test_raid0 00:14:55.241 ************************************ 00:14:55.241 22:22:54 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@86 -- # raid_pid=114312 00:14:55.241 Process raid pid: 114312 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114312' 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:55.241 22:22:54 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114312 /var/tmp/spdk-raid.sock 00:14:55.241 22:22:54 -- common/autotest_common.sh@819 -- # '[' -z 114312 ']' 00:14:55.241 22:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
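The waitforlisten step traced here blocks until the new bdev_svc process answers on /var/tmp/spdk-raid.sock. A simplified stand-in for that helper (using rpc_get_methods as the probe and a 0.1 s poll interval, both assumptions here; the real helper in common/autotest_common.sh carries richer retry logic around max_retries=100):

sock=/var/tmp/spdk-raid.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s ${sock}"
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "${sock}" -i 0 -L bdev_raid &
raid_pid=$!
for ((i = 0; i < 100; i++)); do
        # Any successful RPC means the app is up and listening on the socket.
        ${rpc} -t 1 rpc_get_methods &> /dev/null && break
        sleep 0.1
done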
00:14:55.241 22:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.241 22:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.241 22:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.241 22:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:55.241 [2024-07-14 22:22:54.129283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:55.241 [2024-07-14 22:22:54.129461] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.241 [2024-07-14 22:22:54.283188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.500 [2024-07-14 22:22:54.488928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.759 [2024-07-14 22:22:54.688908] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.326 22:22:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.326 22:22:55 -- common/autotest_common.sh@852 -- # return 0 00:14:56.326 22:22:55 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:56.326 22:22:55 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:56.326 22:22:55 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:56.326 22:22:55 -- bdev/bdev_raid.sh@70 -- # cat 00:14:56.326 22:22:55 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:56.584 [2024-07-14 22:22:55.416464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:56.584 [2024-07-14 22:22:55.418485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:56.584 [2024-07-14 22:22:55.418568] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:56.584 [2024-07-14 22:22:55.418581] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:56.584 [2024-07-14 22:22:55.418719] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:56.584 [2024-07-14 22:22:55.419079] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:56.584 [2024-07-14 22:22:55.419094] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:56.584 [2024-07-14 22:22:55.419245] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.584 Base_1 00:14:56.584 Base_2 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:56.584 22:22:55 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.584 
22:22:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@12 -- # local i 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.584 22:22:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:56.843 [2024-07-14 22:22:55.856982] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:56.843 /dev/nbd0 00:14:56.843 22:22:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.843 22:22:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.843 22:22:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:56.843 22:22:55 -- common/autotest_common.sh@857 -- # local i 00:14:56.843 22:22:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:56.843 22:22:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:56.843 22:22:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:56.843 22:22:55 -- common/autotest_common.sh@861 -- # break 00:14:56.843 22:22:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:56.843 22:22:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:56.843 22:22:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.843 1+0 records in 00:14:56.843 1+0 records out 00:14:56.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321173 s, 12.8 MB/s 00:14:56.843 22:22:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.843 22:22:55 -- common/autotest_common.sh@874 -- # size=4096 00:14:56.843 22:22:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.843 22:22:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:56.843 22:22:55 -- common/autotest_common.sh@877 -- # return 0 00:14:56.843 22:22:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.843 22:22:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.100 22:22:55 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:57.100 22:22:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.100 22:22:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:57.100 22:22:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:57.100 { 00:14:57.100 "nbd_device": "/dev/nbd0", 00:14:57.100 "bdev_name": "raid" 00:14:57.100 } 00:14:57.100 ]' 00:14:57.100 22:22:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:57.100 22:22:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:57.100 { 00:14:57.100 "nbd_device": "/dev/nbd0", 00:14:57.100 "bdev_name": "raid" 00:14:57.100 } 00:14:57.100 ]' 00:14:57.359 22:22:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:57.359 22:22:56 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:57.359 22:22:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:57.359 22:22:56 -- bdev/nbd_common.sh@65 -- # count=1 00:14:57.359 22:22:56 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:57.359 
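The raid_unmap_data_verify body traced next writes random data through the NBD export, unmaps regions with blkdiscard, and checks that device and reference file still agree. One region of that loop in sketch form (offsets, lengths, and paths are copied from the trace; the test repeats this for three offset/length pairs on /dev/nbd0 with 512-byte blocks):

dd if=/dev/urandom of=/raidrandtest bs=512 count=4096            # seed a 2 MiB reference file
dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct  # mirror it onto the raid device
cmp -b -n 2097152 /raidrandtest /dev/nbd0                        # baseline: device matches reference
dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc  # zero one region in the reference
blkdiscard -o 0 -l 65536 /dev/nbd0                               # unmap the same byte range on the device
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidrandtest /dev/nbd0                        # discarded blocks must read back as zeroes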
22:22:56 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:57.359 4096+0 records in 00:14:57.359 4096+0 records out 00:14:57.359 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0249583 s, 84.0 MB/s 00:14:57.359 22:22:56 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:57.618 4096+0 records in 00:14:57.618 4096+0 records out 00:14:57.618 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.24126 s, 8.7 MB/s 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:57.618 128+0 records in 00:14:57.618 128+0 records out 00:14:57.618 65536 bytes (66 kB, 64 KiB) copied, 0.000884396 s, 74.1 MB/s 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:57.618 2035+0 records in 00:14:57.618 2035+0 records out 00:14:57.618 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00629976 s, 165 MB/s 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=164352 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:57.618 456+0 records in 00:14:57.618 456+0 records out 00:14:57.618 233472 bytes (233 kB, 228 KiB) copied, 0.00150149 s, 155 MB/s 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:57.618 22:22:56 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@51 -- # local i 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.618 22:22:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.877 [2024-07-14 22:22:56.742788] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@41 -- # break 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.877 22:22:56 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:57.877 22:22:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@65 -- # true 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@65 -- # count=0 00:14:58.136 22:22:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:58.136 22:22:56 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:58.136 22:22:56 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:58.136 22:22:56 -- bdev/bdev_raid.sh@111 -- # killprocess 114312 00:14:58.136 22:22:56 -- common/autotest_common.sh@926 -- # '[' -z 114312 ']' 00:14:58.136 22:22:56 -- common/autotest_common.sh@930 -- # kill -0 114312 00:14:58.136 22:22:56 -- common/autotest_common.sh@931 -- # uname 00:14:58.137 22:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.137 22:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 114312 00:14:58.137 22:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:58.137 killing process with pid 114312 00:14:58.137 22:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:58.137 22:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114312' 00:14:58.137 22:22:57 -- common/autotest_common.sh@945 -- # kill 114312 00:14:58.137 22:22:57 -- common/autotest_common.sh@950 -- # wait 114312 00:14:58.137 [2024-07-14 22:22:57.005168] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.137 [2024-07-14 22:22:57.005260] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.137 [2024-07-14 22:22:57.005497] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.137 [2024-07-14 22:22:57.005552] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:58.137 [2024-07-14 22:22:57.152630] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:59.512 00:14:59.512 real 0m4.112s 00:14:59.512 user 0m5.283s 00:14:59.512 sys 0m0.808s 00:14:59.512 22:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.512 22:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.512 ************************************ 00:14:59.512 END TEST raid_function_test_raid0 00:14:59.512 ************************************ 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:59.512 22:22:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:59.512 22:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.512 22:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.512 ************************************ 00:14:59.512 START TEST raid_function_test_concat 00:14:59.512 ************************************ 00:14:59.512 22:22:58 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:59.512 Process raid pid: 114467 00:14:59.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@86 -- # raid_pid=114467 00:14:59.512 22:22:58 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.513 22:22:58 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114467' 00:14:59.513 22:22:58 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114467 /var/tmp/spdk-raid.sock 00:14:59.513 22:22:58 -- common/autotest_common.sh@819 -- # '[' -z 114467 ']' 00:14:59.513 22:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.513 22:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:59.513 22:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.513 22:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:59.513 22:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.513 [2024-07-14 22:22:58.296290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:59.513 [2024-07-14 22:22:58.296722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.513 [2024-07-14 22:22:58.454302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.771 [2024-07-14 22:22:58.702937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.030 [2024-07-14 22:22:58.899760] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.288 22:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:00.288 22:22:59 -- common/autotest_common.sh@852 -- # return 0 00:15:00.288 22:22:59 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:15:00.288 22:22:59 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:15:00.288 22:22:59 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:00.288 22:22:59 -- bdev/bdev_raid.sh@70 -- # cat 00:15:00.288 22:22:59 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:00.547 [2024-07-14 22:22:59.517371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:00.547 [2024-07-14 22:22:59.519620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:00.547 [2024-07-14 22:22:59.519843] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:00.547 [2024-07-14 22:22:59.519948] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:00.547 [2024-07-14 22:22:59.520129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:00.547 [2024-07-14 22:22:59.520596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:00.547 [2024-07-14 22:22:59.520724] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:15:00.547 [2024-07-14 22:22:59.520967] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.547 Base_1 00:15:00.547 Base_2 00:15:00.547 22:22:59 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:00.547 22:22:59 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.547 22:22:59 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:00.806 22:22:59 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:00.806 22:22:59 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:00.806 22:22:59 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@12 -- # local i 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.806 22:22:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:01.065 [2024-07-14 22:22:59.957446] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:01.065 /dev/nbd0 00:15:01.065 22:22:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.065 22:22:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.065 22:22:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:01.065 22:22:59 -- common/autotest_common.sh@857 -- # local i 00:15:01.065 22:22:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:01.065 22:22:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:01.065 22:22:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:01.065 22:22:59 -- common/autotest_common.sh@861 -- # break 00:15:01.065 22:22:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:01.065 22:22:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:01.065 22:22:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.065 1+0 records in 00:15:01.065 1+0 records out 00:15:01.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00188447 s, 2.2 MB/s 00:15:01.065 22:22:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.065 22:22:59 -- common/autotest_common.sh@874 -- # size=4096 00:15:01.065 22:22:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.065 22:22:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:01.065 22:22:59 -- common/autotest_common.sh@877 -- # return 0 00:15:01.065 22:22:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.065 22:22:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.065 22:23:00 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:01.065 22:23:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:01.065 22:23:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:01.324 { 00:15:01.324 "nbd_device": "/dev/nbd0", 00:15:01.324 "bdev_name": "raid" 00:15:01.324 } 00:15:01.324 ]' 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:01.324 { 00:15:01.324 "nbd_device": "/dev/nbd0", 00:15:01.324 "bdev_name": "raid" 00:15:01.324 } 00:15:01.324 ]' 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@65 -- # count=1 00:15:01.324 22:23:00 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 
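The three piped commands just echoed derive the device's logical sector size, which seeds the blksize assignment that follows; collapsed into one line (same pipeline, /dev/nbd0 assumed present):

blksize=$(lsblk -o LOG-SEC /dev/nbd0 | grep -v LOG-SEC | cut -d ' ' -f 5)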
00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:01.325 4096+0 records in 00:15:01.325 4096+0 records out 00:15:01.325 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0216831 s, 96.7 MB/s 00:15:01.325 22:23:00 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:01.584 4096+0 records in 00:15:01.584 4096+0 records out 00:15:01.584 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.258943 s, 8.1 MB/s 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:01.584 128+0 records in 00:15:01.584 128+0 records out 00:15:01.584 65536 bytes (66 kB, 64 KiB) copied, 0.000972168 s, 67.4 MB/s 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:01.584 22:23:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:01.842 2035+0 records in 00:15:01.842 2035+0 records out 00:15:01.842 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00701437 s, 149 MB/s 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:01.842 456+0 records in 00:15:01.842 456+0 records out 00:15:01.842 233472 bytes (233 kB, 228 KiB) copied, 0.00167102 s, 140 MB/s 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@46 -- # blockdev 
--flushbufs /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:01.842 22:23:00 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@51 -- # local i 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.842 22:23:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:02.100 22:23:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.101 [2024-07-14 22:23:00.969629] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@41 -- # break 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.101 22:23:00 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:02.101 22:23:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@65 -- # true 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@65 -- # count=0 00:15:02.360 22:23:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:02.360 22:23:01 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:02.360 22:23:01 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:02.360 22:23:01 -- bdev/bdev_raid.sh@111 -- # killprocess 114467 00:15:02.360 22:23:01 -- common/autotest_common.sh@926 -- # '[' -z 114467 ']' 00:15:02.360 22:23:01 -- common/autotest_common.sh@930 -- # kill -0 114467 00:15:02.360 22:23:01 -- common/autotest_common.sh@931 -- # uname 00:15:02.360 22:23:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.360 22:23:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114467 00:15:02.360 killing process with pid 114467 00:15:02.360 22:23:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.360 22:23:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.360 22:23:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114467' 00:15:02.360 22:23:01 -- common/autotest_common.sh@945 -- # kill 114467 00:15:02.360 22:23:01 -- common/autotest_common.sh@950 -- # wait 114467 00:15:02.360 [2024-07-14 
22:23:01.243343] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.360 [2024-07-14 22:23:01.243537] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.360 [2024-07-14 22:23:01.243650] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.360 [2024-07-14 22:23:01.243671] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:15:02.360 [2024-07-14 22:23:01.389393] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.750 ************************************ 00:15:03.750 END TEST raid_function_test_concat 00:15:03.750 ************************************ 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:03.750 00:15:03.750 real 0m4.193s 00:15:03.750 user 0m5.246s 00:15:03.750 sys 0m0.981s 00:15:03.750 22:23:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.750 22:23:02 -- common/autotest_common.sh@10 -- # set +x 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:15:03.750 22:23:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:03.750 22:23:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:03.750 22:23:02 -- common/autotest_common.sh@10 -- # set +x 00:15:03.750 ************************************ 00:15:03.750 START TEST raid0_resize_test 00:15:03.750 ************************************ 00:15:03.750 22:23:02 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@301 -- # raid_pid=114631 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 114631' 00:15:03.750 Process raid pid: 114631 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@303 -- # waitforlisten 114631 /var/tmp/spdk-raid.sock 00:15:03.750 22:23:02 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:03.750 22:23:02 -- common/autotest_common.sh@819 -- # '[' -z 114631 ']' 00:15:03.750 22:23:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:03.750 22:23:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:03.750 22:23:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:03.750 22:23:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:03.750 22:23:02 -- common/autotest_common.sh@10 -- # set +x 00:15:03.750 [2024-07-14 22:23:02.550109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
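The resize test starting here assembles a raid0 from two null bdevs; the equivalent standalone RPCs, with the names, sizes (32 MiB, 512-byte blocks), and 64 KiB strip taken from the trace that follows:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
${rpc} bdev_null_create Base_1 32 512
${rpc} bdev_null_create Base_2 32 512
${rpc} bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid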
00:15:03.750 [2024-07-14 22:23:02.550839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.750 [2024-07-14 22:23:02.726222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.017 [2024-07-14 22:23:02.958668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.275 [2024-07-14 22:23:03.152891] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.533 22:23:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.533 22:23:03 -- common/autotest_common.sh@852 -- # return 0 00:15:04.533 22:23:03 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:04.791 Base_1 00:15:04.791 22:23:03 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:05.048 Base_2 00:15:05.048 22:23:03 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:05.048 [2024-07-14 22:23:04.086369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:05.048 [2024-07-14 22:23:04.088310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:05.048 [2024-07-14 22:23:04.088379] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:05.048 [2024-07-14 22:23:04.088392] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:05.048 [2024-07-14 22:23:04.088556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:15:05.048 [2024-07-14 22:23:04.088891] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:05.048 [2024-07-14 22:23:04.088904] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:15:05.048 [2024-07-14 22:23:04.089055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.048 22:23:04 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:05.306 [2024-07-14 22:23:04.278409] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:05.306 [2024-07-14 22:23:04.278438] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:05.306 true 00:15:05.306 22:23:04 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:05.306 22:23:04 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:15:05.564 [2024-07-14 22:23:04.474556] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.564 22:23:04 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:15:05.564 22:23:04 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:15:05.564 22:23:04 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:15:05.564 22:23:04 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:05.822 [2024-07-14 22:23:04.670460] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
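After only Base_1 is resized above, the array must still report 64 MiB, since raid0 capacity is the member count times the smallest member. The size check in sketch form (the 512-byte block size and expected values come from the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
blkcnt=$(${rpc} bdev_get_bdevs -b Raid | jq '.[].num_blocks')
raid_size_mb=$((blkcnt * 512 / 1048576))
[ "${raid_size_mb}" -eq 64 ] || exit 1   # grows to 128 only after Base_2 is resized too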
00:15:05.822 [2024-07-14 22:23:04.670486] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:05.822 [2024-07-14 22:23:04.670557] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:15:05.822 [2024-07-14 22:23:04.670631] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:05.822 true 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:05.822 [2024-07-14 22:23:04.862595] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:15:05.822 22:23:04 -- bdev/bdev_raid.sh@332 -- # killprocess 114631 00:15:05.822 22:23:04 -- common/autotest_common.sh@926 -- # '[' -z 114631 ']' 00:15:05.822 22:23:04 -- common/autotest_common.sh@930 -- # kill -0 114631 00:15:05.822 22:23:04 -- common/autotest_common.sh@931 -- # uname 00:15:05.822 22:23:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.822 22:23:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114631 00:15:06.080 22:23:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.080 killing process with pid 114631 00:15:06.080 22:23:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.080 22:23:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114631' 00:15:06.080 22:23:04 -- common/autotest_common.sh@945 -- # kill 114631 00:15:06.080 22:23:04 -- common/autotest_common.sh@950 -- # wait 114631 00:15:06.080 [2024-07-14 22:23:04.896568] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.080 [2024-07-14 22:23:04.896688] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.080 [2024-07-14 22:23:04.896733] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.080 [2024-07-14 22:23:04.896742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:15:06.080 [2024-07-14 22:23:04.897311] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.016 22:23:05 -- bdev/bdev_raid.sh@334 -- # return 0 00:15:07.016 ************************************ 00:15:07.016 END TEST raid0_resize_test 00:15:07.016 ************************************ 00:15:07.016 00:15:07.016 real 0m3.467s 00:15:07.016 user 0m4.796s 00:15:07.016 sys 0m0.511s 00:15:07.016 22:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.016 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:15:07.016 22:23:05 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:07.016 22:23:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:07.016 22:23:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:07.016 22:23:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:07.016 22:23:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.017 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:15:07.017 ************************************ 00:15:07.017 START TEST 
raid_state_function_test 00:15:07.017 ************************************ 00:15:07.017 22:23:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:07.017 Process raid pid: 114726 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=114726 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114726' 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114726 /var/tmp/spdk-raid.sock 00:15:07.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:07.017 22:23:06 -- common/autotest_common.sh@819 -- # '[' -z 114726 ']' 00:15:07.017 22:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:07.017 22:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.017 22:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:07.017 22:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.017 22:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.017 22:23:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:07.274 [2024-07-14 22:23:06.079366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
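The state test launched here creates an array whose base bdevs do not exist yet and expects it to sit in the "configuring" state. As standalone RPCs (the create command and the jq filter are copied from the trace below):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
${rpc} bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
state=$(${rpc} bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "${state}" = configuring ] || exit 1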
00:15:07.274 [2024-07-14 22:23:06.079576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.274 [2024-07-14 22:23:06.247481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.532 [2024-07-14 22:23:06.449191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.791 [2024-07-14 22:23:06.642584] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.049 22:23:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.049 22:23:06 -- common/autotest_common.sh@852 -- # return 0 00:15:08.049 22:23:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.307 [2024-07-14 22:23:07.224247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.307 [2024-07-14 22:23:07.224386] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.307 [2024-07-14 22:23:07.224402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.307 [2024-07-14 22:23:07.224423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.307 22:23:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.565 22:23:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.565 "name": "Existed_Raid", 00:15:08.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.565 "strip_size_kb": 64, 00:15:08.565 "state": "configuring", 00:15:08.565 "raid_level": "raid0", 00:15:08.565 "superblock": false, 00:15:08.565 "num_base_bdevs": 2, 00:15:08.565 "num_base_bdevs_discovered": 0, 00:15:08.565 "num_base_bdevs_operational": 2, 00:15:08.565 "base_bdevs_list": [ 00:15:08.565 { 00:15:08.565 "name": "BaseBdev1", 00:15:08.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.565 "is_configured": false, 00:15:08.565 "data_offset": 0, 00:15:08.565 "data_size": 0 00:15:08.565 }, 00:15:08.565 { 00:15:08.565 "name": "BaseBdev2", 00:15:08.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.565 "is_configured": false, 00:15:08.565 "data_offset": 0, 00:15:08.565 "data_size": 0 00:15:08.565 } 00:15:08.565 ] 00:15:08.565 }' 00:15:08.565 22:23:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.565 22:23:07 -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.130 22:23:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.388 [2024-07-14 22:23:08.300512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.388 [2024-07-14 22:23:08.300571] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:09.388 22:23:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:09.646 [2024-07-14 22:23:08.532604] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.646 [2024-07-14 22:23:08.532740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.646 [2024-07-14 22:23:08.532772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.646 [2024-07-14 22:23:08.532798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.646 22:23:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.904 [2024-07-14 22:23:08.747857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.904 BaseBdev1 00:15:09.904 22:23:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:09.904 22:23:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:09.904 22:23:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.904 22:23:08 -- common/autotest_common.sh@889 -- # local i 00:15:09.904 22:23:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.904 22:23:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.904 22:23:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.904 22:23:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.162 [ 00:15:10.162 { 00:15:10.162 "name": "BaseBdev1", 00:15:10.162 "aliases": [ 00:15:10.162 "c2b7e6d6-36da-4415-b294-13c7c71dee8b" 00:15:10.162 ], 00:15:10.162 "product_name": "Malloc disk", 00:15:10.162 "block_size": 512, 00:15:10.162 "num_blocks": 65536, 00:15:10.162 "uuid": "c2b7e6d6-36da-4415-b294-13c7c71dee8b", 00:15:10.162 "assigned_rate_limits": { 00:15:10.162 "rw_ios_per_sec": 0, 00:15:10.162 "rw_mbytes_per_sec": 0, 00:15:10.162 "r_mbytes_per_sec": 0, 00:15:10.162 "w_mbytes_per_sec": 0 00:15:10.162 }, 00:15:10.162 "claimed": true, 00:15:10.162 "claim_type": "exclusive_write", 00:15:10.162 "zoned": false, 00:15:10.162 "supported_io_types": { 00:15:10.162 "read": true, 00:15:10.162 "write": true, 00:15:10.162 "unmap": true, 00:15:10.162 "write_zeroes": true, 00:15:10.162 "flush": true, 00:15:10.162 "reset": true, 00:15:10.162 "compare": false, 00:15:10.162 "compare_and_write": false, 00:15:10.162 "abort": true, 00:15:10.162 "nvme_admin": false, 00:15:10.162 "nvme_io": false 00:15:10.162 }, 00:15:10.162 "memory_domains": [ 00:15:10.162 { 00:15:10.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.162 "dma_device_type": 2 00:15:10.162 } 00:15:10.162 ], 00:15:10.162 "driver_specific": {} 00:15:10.162 } 00:15:10.162 ] 00:15:10.162 22:23:09 
-- common/autotest_common.sh@895 -- # return 0 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.162 22:23:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.420 22:23:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.420 "name": "Existed_Raid", 00:15:10.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.420 "strip_size_kb": 64, 00:15:10.420 "state": "configuring", 00:15:10.420 "raid_level": "raid0", 00:15:10.420 "superblock": false, 00:15:10.420 "num_base_bdevs": 2, 00:15:10.420 "num_base_bdevs_discovered": 1, 00:15:10.420 "num_base_bdevs_operational": 2, 00:15:10.420 "base_bdevs_list": [ 00:15:10.420 { 00:15:10.420 "name": "BaseBdev1", 00:15:10.420 "uuid": "c2b7e6d6-36da-4415-b294-13c7c71dee8b", 00:15:10.420 "is_configured": true, 00:15:10.420 "data_offset": 0, 00:15:10.420 "data_size": 65536 00:15:10.420 }, 00:15:10.420 { 00:15:10.420 "name": "BaseBdev2", 00:15:10.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.420 "is_configured": false, 00:15:10.420 "data_offset": 0, 00:15:10.420 "data_size": 0 00:15:10.420 } 00:15:10.420 ] 00:15:10.420 }' 00:15:10.420 22:23:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.420 22:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:10.989 22:23:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.247 [2024-07-14 22:23:10.280213] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.247 [2024-07-14 22:23:10.280301] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:11.247 22:23:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:11.247 22:23:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.505 [2024-07-14 22:23:10.468272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.505 [2024-07-14 22:23:10.470462] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.505 [2024-07-14 22:23:10.470544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:11.505 22:23:10 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.505 22:23:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.763 22:23:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.763 "name": "Existed_Raid", 00:15:11.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.763 "strip_size_kb": 64, 00:15:11.763 "state": "configuring", 00:15:11.763 "raid_level": "raid0", 00:15:11.763 "superblock": false, 00:15:11.763 "num_base_bdevs": 2, 00:15:11.763 "num_base_bdevs_discovered": 1, 00:15:11.763 "num_base_bdevs_operational": 2, 00:15:11.763 "base_bdevs_list": [ 00:15:11.763 { 00:15:11.763 "name": "BaseBdev1", 00:15:11.763 "uuid": "c2b7e6d6-36da-4415-b294-13c7c71dee8b", 00:15:11.763 "is_configured": true, 00:15:11.763 "data_offset": 0, 00:15:11.763 "data_size": 65536 00:15:11.763 }, 00:15:11.763 { 00:15:11.763 "name": "BaseBdev2", 00:15:11.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.763 "is_configured": false, 00:15:11.763 "data_offset": 0, 00:15:11.763 "data_size": 0 00:15:11.763 } 00:15:11.763 ] 00:15:11.763 }' 00:15:11.764 22:23:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.764 22:23:10 -- common/autotest_common.sh@10 -- # set +x 00:15:12.339 22:23:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.906 [2024-07-14 22:23:11.690006] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.906 [2024-07-14 22:23:11.690107] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:12.906 [2024-07-14 22:23:11.690170] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:12.906 [2024-07-14 22:23:11.690399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:12.906 [2024-07-14 22:23:11.690958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:12.906 [2024-07-14 22:23:11.691093] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:12.907 [2024-07-14 22:23:11.691460] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.907 BaseBdev2 00:15:12.907 22:23:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:12.907 22:23:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:12.907 22:23:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:12.907 22:23:11 -- common/autotest_common.sh@889 -- # local i 00:15:12.907 22:23:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:12.907 22:23:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:12.907 
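Annotation: the waitforbdev helper that runs next does not poll in a loop; it leans on two RPCs, reconstructed here from the @892/@894 trace lines (a sketch, not the verbatim helper):

    # wait until every module has examined the registered bdevs, then ask
    # for the named bdev with a server-side timeout in milliseconds (-t)
    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

    waitforbdev BaseBdev2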
22:23:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.907 22:23:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.165 [ 00:15:13.165 { 00:15:13.165 "name": "BaseBdev2", 00:15:13.165 "aliases": [ 00:15:13.165 "f656f45e-d748-463d-94ee-6e4b5bffbc18" 00:15:13.165 ], 00:15:13.165 "product_name": "Malloc disk", 00:15:13.165 "block_size": 512, 00:15:13.165 "num_blocks": 65536, 00:15:13.165 "uuid": "f656f45e-d748-463d-94ee-6e4b5bffbc18", 00:15:13.165 "assigned_rate_limits": { 00:15:13.165 "rw_ios_per_sec": 0, 00:15:13.165 "rw_mbytes_per_sec": 0, 00:15:13.165 "r_mbytes_per_sec": 0, 00:15:13.165 "w_mbytes_per_sec": 0 00:15:13.165 }, 00:15:13.165 "claimed": true, 00:15:13.165 "claim_type": "exclusive_write", 00:15:13.165 "zoned": false, 00:15:13.165 "supported_io_types": { 00:15:13.165 "read": true, 00:15:13.165 "write": true, 00:15:13.165 "unmap": true, 00:15:13.165 "write_zeroes": true, 00:15:13.165 "flush": true, 00:15:13.165 "reset": true, 00:15:13.165 "compare": false, 00:15:13.165 "compare_and_write": false, 00:15:13.165 "abort": true, 00:15:13.165 "nvme_admin": false, 00:15:13.165 "nvme_io": false 00:15:13.165 }, 00:15:13.165 "memory_domains": [ 00:15:13.165 { 00:15:13.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.165 "dma_device_type": 2 00:15:13.165 } 00:15:13.165 ], 00:15:13.165 "driver_specific": {} 00:15:13.165 } 00:15:13.165 ] 00:15:13.165 22:23:12 -- common/autotest_common.sh@895 -- # return 0 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.165 22:23:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.166 22:23:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.166 22:23:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.166 22:23:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.166 22:23:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.166 22:23:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.424 22:23:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.424 "name": "Existed_Raid", 00:15:13.424 "uuid": "84ecb2fe-388d-4beb-901a-f48fd290f2e2", 00:15:13.424 "strip_size_kb": 64, 00:15:13.424 "state": "online", 00:15:13.424 "raid_level": "raid0", 00:15:13.424 "superblock": false, 00:15:13.424 "num_base_bdevs": 2, 00:15:13.424 "num_base_bdevs_discovered": 2, 00:15:13.424 "num_base_bdevs_operational": 2, 00:15:13.424 "base_bdevs_list": [ 00:15:13.424 { 00:15:13.424 "name": "BaseBdev1", 00:15:13.424 "uuid": "c2b7e6d6-36da-4415-b294-13c7c71dee8b", 00:15:13.424 "is_configured": true, 00:15:13.424 "data_offset": 0, 00:15:13.424 "data_size": 65536 00:15:13.424 }, 00:15:13.424 { 00:15:13.424 "name": "BaseBdev2", 
00:15:13.424 "uuid": "f656f45e-d748-463d-94ee-6e4b5bffbc18", 00:15:13.424 "is_configured": true, 00:15:13.424 "data_offset": 0, 00:15:13.424 "data_size": 65536 00:15:13.424 } 00:15:13.424 ] 00:15:13.424 }' 00:15:13.424 22:23:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.424 22:23:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.361 [2024-07-14 22:23:13.306498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.361 [2024-07-14 22:23:13.306536] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.361 [2024-07-14 22:23:13.306613] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.361 22:23:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.620 22:23:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.620 "name": "Existed_Raid", 00:15:14.620 "uuid": "84ecb2fe-388d-4beb-901a-f48fd290f2e2", 00:15:14.620 "strip_size_kb": 64, 00:15:14.620 "state": "offline", 00:15:14.620 "raid_level": "raid0", 00:15:14.620 "superblock": false, 00:15:14.620 "num_base_bdevs": 2, 00:15:14.620 "num_base_bdevs_discovered": 1, 00:15:14.620 "num_base_bdevs_operational": 1, 00:15:14.620 "base_bdevs_list": [ 00:15:14.620 { 00:15:14.620 "name": null, 00:15:14.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.620 "is_configured": false, 00:15:14.620 "data_offset": 0, 00:15:14.620 "data_size": 65536 00:15:14.620 }, 00:15:14.620 { 00:15:14.620 "name": "BaseBdev2", 00:15:14.620 "uuid": "f656f45e-d748-463d-94ee-6e4b5bffbc18", 00:15:14.620 "is_configured": true, 00:15:14.620 "data_offset": 0, 00:15:14.620 "data_size": 65536 00:15:14.620 } 00:15:14.620 ] 00:15:14.620 }' 00:15:14.620 22:23:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.620 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:15:15.186 22:23:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:15.186 22:23:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:15.186 22:23:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.186 22:23:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:15.444 22:23:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:15.444 22:23:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.444 22:23:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:15.702 [2024-07-14 22:23:14.583313] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.702 [2024-07-14 22:23:14.583378] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:15.702 22:23:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:15.702 22:23:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:15.702 22:23:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.702 22:23:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.962 22:23:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:15.962 22:23:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:15.962 22:23:14 -- bdev/bdev_raid.sh@287 -- # killprocess 114726 00:15:15.962 22:23:14 -- common/autotest_common.sh@926 -- # '[' -z 114726 ']' 00:15:15.962 22:23:14 -- common/autotest_common.sh@930 -- # kill -0 114726 00:15:15.962 22:23:14 -- common/autotest_common.sh@931 -- # uname 00:15:15.962 22:23:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.962 22:23:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114726 00:15:15.962 killing process with pid 114726 00:15:15.962 22:23:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:15.962 22:23:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:15.962 22:23:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114726' 00:15:15.962 22:23:14 -- common/autotest_common.sh@945 -- # kill 114726 00:15:15.962 22:23:14 -- common/autotest_common.sh@950 -- # wait 114726 00:15:15.962 [2024-07-14 22:23:14.927149] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.962 [2024-07-14 22:23:14.927598] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.361 ************************************ 00:15:17.361 END TEST raid_state_function_test 00:15:17.361 ************************************ 00:15:17.361 22:23:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:17.361 00:15:17.361 real 0m10.022s 00:15:17.361 user 0m17.407s 00:15:17.361 sys 0m1.160s 00:15:17.361 22:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.361 22:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.361 22:23:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:17.361 22:23:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:17.361 22:23:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.361 22:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.361 ************************************ 00:15:17.361 START TEST raid_state_function_test_sb 00:15:17.361 ************************************ 00:15:17.361 22:23:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:17.361 22:23:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:17.362 22:23:16 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=115055 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115055' 00:15:17.362 Process raid pid: 115055 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115055 /var/tmp/spdk-raid.sock 00:15:17.362 22:23:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.362 22:23:16 -- common/autotest_common.sh@819 -- # '[' -z 115055 ']' 00:15:17.362 22:23:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.362 22:23:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.362 22:23:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.362 22:23:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.362 22:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.362 [2024-07-14 22:23:16.158303] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
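Annotation: the _sb variant differs from the previous test in exactly one knob: superblock=true turns into the -s flag on bdev_raid_create, so raid metadata is written onto the base bdevs themselves. A sketch of the changed call, with the observable consequence noted:

    # identical to the non-superblock create, plus -s
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

The cost shows up later in the JSON dumps: each 65536-block malloc bdev contributes data_offset 2048 and data_size 63488, the first 2048 blocks being reserved for the on-disk superblock.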
00:15:17.362 [2024-07-14 22:23:16.158492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.362 [2024-07-14 22:23:16.326585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.621 [2024-07-14 22:23:16.542296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.879 [2024-07-14 22:23:16.731587] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.138 22:23:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.138 22:23:17 -- common/autotest_common.sh@852 -- # return 0 00:15:18.138 22:23:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:18.396 [2024-07-14 22:23:17.311351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.396 [2024-07-14 22:23:17.311462] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.396 [2024-07-14 22:23:17.311475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.396 [2024-07-14 22:23:17.311495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.396 22:23:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:18.396 22:23:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.397 22:23:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.655 22:23:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.655 "name": "Existed_Raid", 00:15:18.655 "uuid": "0d14da4a-94f3-4339-9edd-e424267cd2f4", 00:15:18.655 "strip_size_kb": 64, 00:15:18.655 "state": "configuring", 00:15:18.655 "raid_level": "raid0", 00:15:18.655 "superblock": true, 00:15:18.655 "num_base_bdevs": 2, 00:15:18.655 "num_base_bdevs_discovered": 0, 00:15:18.655 "num_base_bdevs_operational": 2, 00:15:18.655 "base_bdevs_list": [ 00:15:18.655 { 00:15:18.655 "name": "BaseBdev1", 00:15:18.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.655 "is_configured": false, 00:15:18.655 "data_offset": 0, 00:15:18.655 "data_size": 0 00:15:18.655 }, 00:15:18.655 { 00:15:18.655 "name": "BaseBdev2", 00:15:18.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.655 "is_configured": false, 00:15:18.655 "data_offset": 0, 00:15:18.655 "data_size": 0 00:15:18.655 } 00:15:18.655 ] 00:15:18.655 }' 00:15:18.655 22:23:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.655 22:23:17 -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.222 22:23:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.481 [2024-07-14 22:23:18.395396] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.481 [2024-07-14 22:23:18.395453] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:19.481 22:23:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.739 [2024-07-14 22:23:18.575480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.739 [2024-07-14 22:23:18.575563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.739 [2024-07-14 22:23:18.575591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.739 [2024-07-14 22:23:18.575629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.739 22:23:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.739 [2024-07-14 22:23:18.791232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.739 BaseBdev1 00:15:19.997 22:23:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:19.997 22:23:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:19.997 22:23:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:19.997 22:23:18 -- common/autotest_common.sh@889 -- # local i 00:15:19.997 22:23:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:19.997 22:23:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:19.997 22:23:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.997 22:23:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.256 [ 00:15:20.256 { 00:15:20.256 "name": "BaseBdev1", 00:15:20.256 "aliases": [ 00:15:20.256 "f02a7ee2-378d-4d3d-9234-368aa91fe0d2" 00:15:20.256 ], 00:15:20.256 "product_name": "Malloc disk", 00:15:20.256 "block_size": 512, 00:15:20.256 "num_blocks": 65536, 00:15:20.256 "uuid": "f02a7ee2-378d-4d3d-9234-368aa91fe0d2", 00:15:20.256 "assigned_rate_limits": { 00:15:20.256 "rw_ios_per_sec": 0, 00:15:20.256 "rw_mbytes_per_sec": 0, 00:15:20.256 "r_mbytes_per_sec": 0, 00:15:20.256 "w_mbytes_per_sec": 0 00:15:20.256 }, 00:15:20.256 "claimed": true, 00:15:20.256 "claim_type": "exclusive_write", 00:15:20.256 "zoned": false, 00:15:20.256 "supported_io_types": { 00:15:20.256 "read": true, 00:15:20.256 "write": true, 00:15:20.256 "unmap": true, 00:15:20.256 "write_zeroes": true, 00:15:20.256 "flush": true, 00:15:20.256 "reset": true, 00:15:20.256 "compare": false, 00:15:20.256 "compare_and_write": false, 00:15:20.256 "abort": true, 00:15:20.256 "nvme_admin": false, 00:15:20.256 "nvme_io": false 00:15:20.256 }, 00:15:20.256 "memory_domains": [ 00:15:20.256 { 00:15:20.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.256 "dma_device_type": 2 00:15:20.256 } 00:15:20.256 ], 00:15:20.256 "driver_specific": {} 00:15:20.256 } 00:15:20.256 ] 00:15:20.256 
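Annotation: the dump above ends with "claimed": true and "claim_type": "exclusive_write" — the raid module has taken an exclusive write claim on BaseBdev1, so no other module can open it for writing. One way to assert that from a shell, assuming jq and the $rpc shorthand from the earlier sketch:

    # jq -e sets the exit status from the truthiness of the last result
    $rpc bdev_get_bdevs -b BaseBdev1 \
        | jq -e '.[0].claimed and .[0].claim_type == "exclusive_write"'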
22:23:19 -- common/autotest_common.sh@895 -- # return 0 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.256 22:23:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.515 22:23:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.515 "name": "Existed_Raid", 00:15:20.515 "uuid": "ca8e9ffb-aa27-434c-8fe6-2aea4072fc2a", 00:15:20.515 "strip_size_kb": 64, 00:15:20.515 "state": "configuring", 00:15:20.515 "raid_level": "raid0", 00:15:20.515 "superblock": true, 00:15:20.515 "num_base_bdevs": 2, 00:15:20.515 "num_base_bdevs_discovered": 1, 00:15:20.515 "num_base_bdevs_operational": 2, 00:15:20.515 "base_bdevs_list": [ 00:15:20.515 { 00:15:20.515 "name": "BaseBdev1", 00:15:20.515 "uuid": "f02a7ee2-378d-4d3d-9234-368aa91fe0d2", 00:15:20.515 "is_configured": true, 00:15:20.515 "data_offset": 2048, 00:15:20.515 "data_size": 63488 00:15:20.515 }, 00:15:20.515 { 00:15:20.515 "name": "BaseBdev2", 00:15:20.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.515 "is_configured": false, 00:15:20.516 "data_offset": 0, 00:15:20.516 "data_size": 0 00:15:20.516 } 00:15:20.516 ] 00:15:20.516 }' 00:15:20.516 22:23:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.516 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:15:21.082 22:23:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.341 [2024-07-14 22:23:20.255554] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.341 [2024-07-14 22:23:20.255627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:21.341 22:23:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:21.341 22:23:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.609 22:23:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.872 BaseBdev1 00:15:21.872 22:23:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:21.872 22:23:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:21.872 22:23:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:21.872 22:23:20 -- common/autotest_common.sh@889 -- # local i 00:15:21.872 22:23:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:21.872 22:23:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:21.872 22:23:20 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.128 22:23:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.387 [ 00:15:22.387 { 00:15:22.387 "name": "BaseBdev1", 00:15:22.387 "aliases": [ 00:15:22.387 "cb6e1250-5a1c-40c4-8f16-81776b457aaf" 00:15:22.387 ], 00:15:22.387 "product_name": "Malloc disk", 00:15:22.387 "block_size": 512, 00:15:22.387 "num_blocks": 65536, 00:15:22.387 "uuid": "cb6e1250-5a1c-40c4-8f16-81776b457aaf", 00:15:22.387 "assigned_rate_limits": { 00:15:22.387 "rw_ios_per_sec": 0, 00:15:22.387 "rw_mbytes_per_sec": 0, 00:15:22.387 "r_mbytes_per_sec": 0, 00:15:22.387 "w_mbytes_per_sec": 0 00:15:22.387 }, 00:15:22.387 "claimed": false, 00:15:22.387 "zoned": false, 00:15:22.387 "supported_io_types": { 00:15:22.387 "read": true, 00:15:22.387 "write": true, 00:15:22.387 "unmap": true, 00:15:22.387 "write_zeroes": true, 00:15:22.387 "flush": true, 00:15:22.387 "reset": true, 00:15:22.387 "compare": false, 00:15:22.387 "compare_and_write": false, 00:15:22.387 "abort": true, 00:15:22.387 "nvme_admin": false, 00:15:22.387 "nvme_io": false 00:15:22.387 }, 00:15:22.387 "memory_domains": [ 00:15:22.387 { 00:15:22.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.387 "dma_device_type": 2 00:15:22.387 } 00:15:22.387 ], 00:15:22.387 "driver_specific": {} 00:15:22.387 } 00:15:22.387 ] 00:15:22.387 22:23:21 -- common/autotest_common.sh@895 -- # return 0 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.387 [2024-07-14 22:23:21.405762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.387 [2024-07-14 22:23:21.407850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.387 [2024-07-14 22:23:21.407951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.387 22:23:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.645 22:23:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.645 "name": "Existed_Raid", 00:15:22.645 "uuid": "3d9432d2-4fcb-4bf1-b55c-4fbbde7fb4c9", 00:15:22.645 "strip_size_kb": 64, 00:15:22.645 "state": 
"configuring", 00:15:22.645 "raid_level": "raid0", 00:15:22.645 "superblock": true, 00:15:22.645 "num_base_bdevs": 2, 00:15:22.645 "num_base_bdevs_discovered": 1, 00:15:22.645 "num_base_bdevs_operational": 2, 00:15:22.645 "base_bdevs_list": [ 00:15:22.645 { 00:15:22.645 "name": "BaseBdev1", 00:15:22.645 "uuid": "cb6e1250-5a1c-40c4-8f16-81776b457aaf", 00:15:22.645 "is_configured": true, 00:15:22.645 "data_offset": 2048, 00:15:22.645 "data_size": 63488 00:15:22.646 }, 00:15:22.646 { 00:15:22.646 "name": "BaseBdev2", 00:15:22.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.646 "is_configured": false, 00:15:22.646 "data_offset": 0, 00:15:22.646 "data_size": 0 00:15:22.646 } 00:15:22.646 ] 00:15:22.646 }' 00:15:22.646 22:23:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.646 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:15:23.582 22:23:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.582 [2024-07-14 22:23:22.537211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.582 [2024-07-14 22:23:22.537487] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:23.582 [2024-07-14 22:23:22.537502] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.582 [2024-07-14 22:23:22.537654] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:23.582 BaseBdev2 00:15:23.582 [2024-07-14 22:23:22.538071] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:23.582 [2024-07-14 22:23:22.538100] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:23.582 [2024-07-14 22:23:22.538266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.582 22:23:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:23.582 22:23:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:23.582 22:23:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.582 22:23:22 -- common/autotest_common.sh@889 -- # local i 00:15:23.582 22:23:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.582 22:23:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.582 22:23:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.841 22:23:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.100 [ 00:15:24.100 { 00:15:24.100 "name": "BaseBdev2", 00:15:24.100 "aliases": [ 00:15:24.100 "1526d067-8f9f-4890-a546-54dcfa155eef" 00:15:24.100 ], 00:15:24.100 "product_name": "Malloc disk", 00:15:24.100 "block_size": 512, 00:15:24.100 "num_blocks": 65536, 00:15:24.100 "uuid": "1526d067-8f9f-4890-a546-54dcfa155eef", 00:15:24.100 "assigned_rate_limits": { 00:15:24.100 "rw_ios_per_sec": 0, 00:15:24.100 "rw_mbytes_per_sec": 0, 00:15:24.100 "r_mbytes_per_sec": 0, 00:15:24.100 "w_mbytes_per_sec": 0 00:15:24.100 }, 00:15:24.100 "claimed": true, 00:15:24.100 "claim_type": "exclusive_write", 00:15:24.100 "zoned": false, 00:15:24.100 "supported_io_types": { 00:15:24.100 "read": true, 00:15:24.100 "write": true, 00:15:24.100 "unmap": true, 00:15:24.100 "write_zeroes": true, 00:15:24.100 "flush": true, 00:15:24.100 
"reset": true, 00:15:24.100 "compare": false, 00:15:24.100 "compare_and_write": false, 00:15:24.100 "abort": true, 00:15:24.100 "nvme_admin": false, 00:15:24.100 "nvme_io": false 00:15:24.100 }, 00:15:24.100 "memory_domains": [ 00:15:24.100 { 00:15:24.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.100 "dma_device_type": 2 00:15:24.100 } 00:15:24.100 ], 00:15:24.100 "driver_specific": {} 00:15:24.100 } 00:15:24.100 ] 00:15:24.100 22:23:22 -- common/autotest_common.sh@895 -- # return 0 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.100 22:23:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.100 22:23:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.100 "name": "Existed_Raid", 00:15:24.100 "uuid": "3d9432d2-4fcb-4bf1-b55c-4fbbde7fb4c9", 00:15:24.100 "strip_size_kb": 64, 00:15:24.100 "state": "online", 00:15:24.100 "raid_level": "raid0", 00:15:24.100 "superblock": true, 00:15:24.100 "num_base_bdevs": 2, 00:15:24.100 "num_base_bdevs_discovered": 2, 00:15:24.100 "num_base_bdevs_operational": 2, 00:15:24.100 "base_bdevs_list": [ 00:15:24.100 { 00:15:24.100 "name": "BaseBdev1", 00:15:24.100 "uuid": "cb6e1250-5a1c-40c4-8f16-81776b457aaf", 00:15:24.100 "is_configured": true, 00:15:24.100 "data_offset": 2048, 00:15:24.100 "data_size": 63488 00:15:24.100 }, 00:15:24.100 { 00:15:24.100 "name": "BaseBdev2", 00:15:24.100 "uuid": "1526d067-8f9f-4890-a546-54dcfa155eef", 00:15:24.100 "is_configured": true, 00:15:24.100 "data_offset": 2048, 00:15:24.100 "data_size": 63488 00:15:24.100 } 00:15:24.100 ] 00:15:24.100 }' 00:15:24.100 22:23:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.100 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:15:25.048 22:23:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:25.048 [2024-07-14 22:23:24.097682] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.048 [2024-07-14 22:23:24.097741] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.048 [2024-07-14 22:23:24.097825] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:25.310 
22:23:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.310 22:23:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.566 22:23:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.566 "name": "Existed_Raid", 00:15:25.566 "uuid": "3d9432d2-4fcb-4bf1-b55c-4fbbde7fb4c9", 00:15:25.566 "strip_size_kb": 64, 00:15:25.566 "state": "offline", 00:15:25.566 "raid_level": "raid0", 00:15:25.566 "superblock": true, 00:15:25.566 "num_base_bdevs": 2, 00:15:25.566 "num_base_bdevs_discovered": 1, 00:15:25.566 "num_base_bdevs_operational": 1, 00:15:25.566 "base_bdevs_list": [ 00:15:25.566 { 00:15:25.566 "name": null, 00:15:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.566 "is_configured": false, 00:15:25.566 "data_offset": 2048, 00:15:25.567 "data_size": 63488 00:15:25.567 }, 00:15:25.567 { 00:15:25.567 "name": "BaseBdev2", 00:15:25.567 "uuid": "1526d067-8f9f-4890-a546-54dcfa155eef", 00:15:25.567 "is_configured": true, 00:15:25.567 "data_offset": 2048, 00:15:25.567 "data_size": 63488 00:15:25.567 } 00:15:25.567 ] 00:15:25.567 }' 00:15:25.567 22:23:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.567 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:15:26.131 22:23:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:26.131 22:23:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.131 22:23:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.131 22:23:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.388 22:23:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.388 22:23:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.388 22:23:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:26.645 [2024-07-14 22:23:25.501596] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.645 [2024-07-14 22:23:25.501668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:26.645 22:23:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.645 22:23:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.645 22:23:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.645 22:23:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.903 22:23:25 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:26.903 22:23:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:26.903 22:23:25 -- bdev/bdev_raid.sh@287 -- # killprocess 115055 00:15:26.903 22:23:25 -- common/autotest_common.sh@926 -- # '[' -z 115055 ']' 00:15:26.903 22:23:25 -- common/autotest_common.sh@930 -- # kill -0 115055 00:15:26.903 22:23:25 -- common/autotest_common.sh@931 -- # uname 00:15:26.903 22:23:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:26.903 22:23:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115055 00:15:26.903 killing process with pid 115055 00:15:26.903 22:23:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:26.903 22:23:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:26.903 22:23:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115055' 00:15:26.903 22:23:25 -- common/autotest_common.sh@945 -- # kill 115055 00:15:26.903 22:23:25 -- common/autotest_common.sh@950 -- # wait 115055 00:15:26.903 [2024-07-14 22:23:25.784048] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.903 [2024-07-14 22:23:25.784248] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.835 ************************************ 00:15:27.835 END TEST raid_state_function_test_sb 00:15:27.835 ************************************ 00:15:27.835 22:23:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:27.835 00:15:27.835 real 0m10.753s 00:15:27.835 user 0m18.668s 00:15:27.835 sys 0m1.328s 00:15:27.835 22:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.835 22:23:26 -- common/autotest_common.sh@10 -- # set +x 00:15:27.835 22:23:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:27.835 22:23:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:27.835 22:23:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:27.835 22:23:26 -- common/autotest_common.sh@10 -- # set +x 00:15:28.094 ************************************ 00:15:28.094 START TEST raid_superblock_test 00:15:28.094 ************************************ 00:15:28.094 22:23:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=115401 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115401 
/var/tmp/spdk-raid.sock 00:15:28.094 22:23:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:28.094 22:23:26 -- common/autotest_common.sh@819 -- # '[' -z 115401 ']' 00:15:28.094 22:23:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.094 22:23:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.094 22:23:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.094 22:23:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.094 22:23:26 -- common/autotest_common.sh@10 -- # set +x 00:15:28.094 [2024-07-14 22:23:26.965057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:28.094 [2024-07-14 22:23:26.965261] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115401 ] 00:15:28.094 [2024-07-14 22:23:27.131143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.352 [2024-07-14 22:23:27.345925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.609 [2024-07-14 22:23:27.532648] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.868 22:23:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:28.868 22:23:27 -- common/autotest_common.sh@852 -- # return 0 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.868 22:23:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:29.126 malloc1 00:15:29.126 22:23:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.384 [2024-07-14 22:23:28.303945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.384 [2024-07-14 22:23:28.304055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.384 [2024-07-14 22:23:28.304117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:29.384 [2024-07-14 22:23:28.304221] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.384 [2024-07-14 22:23:28.306714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.384 [2024-07-14 22:23:28.306760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.384 pt1 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
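Annotation: for the superblock test the base devices are not raw malloc bdevs but passthru bdevs layered on top of them, created with fixed UUIDs so the superblock can record stable identifiers for each member. The first iteration, exactly as traced above:

    $rpc bdev_malloc_create 32 512 -b malloc1    # 32 MB, 512-byte blocks
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001

The second loop pass repeats this as malloc2/pt2 with UUID ...0002, as the next trace lines show.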
00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.384 22:23:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:29.642 malloc2 00:15:29.642 22:23:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.900 [2024-07-14 22:23:28.817665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.900 [2024-07-14 22:23:28.817774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.900 [2024-07-14 22:23:28.817820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:29.900 [2024-07-14 22:23:28.817878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.900 [2024-07-14 22:23:28.820092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.900 [2024-07-14 22:23:28.820136] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.900 pt2 00:15:29.900 22:23:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:29.900 22:23:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.900 22:23:28 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:30.160 [2024-07-14 22:23:29.021745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:30.160 [2024-07-14 22:23:29.023723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.160 [2024-07-14 22:23:29.023945] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:30.160 [2024-07-14 22:23:29.023967] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.160 [2024-07-14 22:23:29.024098] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:30.160 [2024-07-14 22:23:29.024468] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:30.160 [2024-07-14 22:23:29.024525] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:30.160 [2024-07-14 22:23:29.024668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
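[Note: with pt1 and pt2 in place, the test assembles the raid0 volume; -z gives the strip size in KiB and -s requests an on-disk superblock. The reported blockcnt 126976 is 2 x (65536 - 2048): each 65536-block base bdev reserves 2048 blocks for the superblock, which is why data_offset is 2048 in the descriptor dumped below. The call, copied from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
]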
00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.160 22:23:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.423 22:23:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.423 "name": "raid_bdev1", 00:15:30.423 "uuid": "d459c763-7a02-4d9e-b419-ea301a81736a", 00:15:30.423 "strip_size_kb": 64, 00:15:30.423 "state": "online", 00:15:30.424 "raid_level": "raid0", 00:15:30.424 "superblock": true, 00:15:30.424 "num_base_bdevs": 2, 00:15:30.424 "num_base_bdevs_discovered": 2, 00:15:30.424 "num_base_bdevs_operational": 2, 00:15:30.424 "base_bdevs_list": [ 00:15:30.424 { 00:15:30.424 "name": "pt1", 00:15:30.424 "uuid": "6b54415d-51e4-50b3-b5e7-1cb33b8e2511", 00:15:30.424 "is_configured": true, 00:15:30.424 "data_offset": 2048, 00:15:30.424 "data_size": 63488 00:15:30.424 }, 00:15:30.424 { 00:15:30.424 "name": "pt2", 00:15:30.424 "uuid": "32faea14-18c4-5ac9-980c-c8a664941457", 00:15:30.424 "is_configured": true, 00:15:30.424 "data_offset": 2048, 00:15:30.424 "data_size": 63488 00:15:30.424 } 00:15:30.424 ] 00:15:30.424 }' 00:15:30.424 22:23:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.424 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:15:30.993 22:23:29 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:30.993 22:23:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:31.252 [2024-07-14 22:23:30.130238] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.252 22:23:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d459c763-7a02-4d9e-b419-ea301a81736a 00:15:31.252 22:23:30 -- bdev/bdev_raid.sh@380 -- # '[' -z d459c763-7a02-4d9e-b419-ea301a81736a ']' 00:15:31.252 22:23:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:31.510 [2024-07-14 22:23:30.394020] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.510 [2024-07-14 22:23:30.394059] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.510 [2024-07-14 22:23:30.394199] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.510 [2024-07-14 22:23:30.394260] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.510 [2024-07-14 22:23:30.394287] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:31.510 22:23:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.510 22:23:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
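[Note: verify_raid_bdev_state pulls the descriptor dumped above via bdev_raid_get_bdevs and asserts individual fields. The same checks can be reproduced by hand; a sketch assuming jq is available, with field names exactly as they appear in the JSON above:

    desc=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$desc")" = online ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$desc")" -eq 2 ]
]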
00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.797 22:23:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:32.055 22:23:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:32.055 22:23:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.313 22:23:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:32.313 22:23:31 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.313 22:23:31 -- common/autotest_common.sh@640 -- # local es=0 00:15:32.313 22:23:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.313 22:23:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.313 22:23:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:32.313 22:23:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.313 22:23:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:32.313 22:23:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.313 22:23:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:32.313 22:23:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.313 22:23:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:32.313 22:23:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.571 [2024-07-14 22:23:31.430205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.571 [2024-07-14 22:23:31.432501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.571 [2024-07-14 22:23:31.432609] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:32.571 [2024-07-14 22:23:31.432706] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:32.571 [2024-07-14 22:23:31.432744] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.571 [2024-07-14 22:23:31.432756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:32.571 request: 00:15:32.571 { 00:15:32.571 "name": "raid_bdev1", 00:15:32.571 "raid_level": "raid0", 00:15:32.571 "base_bdevs": [ 00:15:32.571 "malloc1", 00:15:32.571 "malloc2" 00:15:32.571 ], 00:15:32.571 "superblock": false, 00:15:32.571 "strip_size_kb": 64, 00:15:32.571 "method": "bdev_raid_create", 00:15:32.571 "req_id": 1 00:15:32.571 } 00:15:32.571 Got JSON-RPC error response 00:15:32.571 response: 00:15:32.571 { 00:15:32.571 "code": -17, 00:15:32.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.571 } 00:15:32.571 22:23:31 -- common/autotest_common.sh@643 -- # es=1 00:15:32.571 22:23:31 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:15:32.571 22:23:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:32.571 22:23:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:32.571 22:23:31 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.571 22:23:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:32.571 22:23:31 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:32.571 22:23:31 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:32.571 22:23:31 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.830 [2024-07-14 22:23:31.818226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.830 [2024-07-14 22:23:31.818338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.830 [2024-07-14 22:23:31.818379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.830 [2024-07-14 22:23:31.818404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.830 [2024-07-14 22:23:31.820486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.830 [2024-07-14 22:23:31.820563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.830 [2024-07-14 22:23:31.820672] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:32.830 [2024-07-14 22:23:31.820735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.830 pt1 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.831 22:23:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.090 22:23:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.090 "name": "raid_bdev1", 00:15:33.090 "uuid": "d459c763-7a02-4d9e-b419-ea301a81736a", 00:15:33.090 "strip_size_kb": 64, 00:15:33.090 "state": "configuring", 00:15:33.090 "raid_level": "raid0", 00:15:33.090 "superblock": true, 00:15:33.090 "num_base_bdevs": 2, 00:15:33.090 "num_base_bdevs_discovered": 1, 00:15:33.090 "num_base_bdevs_operational": 2, 00:15:33.090 "base_bdevs_list": [ 00:15:33.090 { 00:15:33.090 "name": "pt1", 00:15:33.090 "uuid": "6b54415d-51e4-50b3-b5e7-1cb33b8e2511", 00:15:33.090 "is_configured": true, 00:15:33.090 "data_offset": 2048, 00:15:33.090 "data_size": 63488 00:15:33.090 }, 00:15:33.090 { 00:15:33.090 "name": null, 00:15:33.090 "uuid": "32faea14-18c4-5ac9-980c-c8a664941457", 00:15:33.090 
"is_configured": false, 00:15:33.090 "data_offset": 2048, 00:15:33.090 "data_size": 63488 00:15:33.090 } 00:15:33.090 ] 00:15:33.090 }' 00:15:33.090 22:23:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.090 22:23:32 -- common/autotest_common.sh@10 -- # set +x 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.026 [2024-07-14 22:23:32.982528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.026 [2024-07-14 22:23:32.982633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.026 [2024-07-14 22:23:32.982676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:34.026 [2024-07-14 22:23:32.982702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.026 [2024-07-14 22:23:32.983212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.026 [2024-07-14 22:23:32.983247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.026 [2024-07-14 22:23:32.983347] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:34.026 [2024-07-14 22:23:32.983374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.026 [2024-07-14 22:23:32.983509] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:34.026 [2024-07-14 22:23:32.983522] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.026 [2024-07-14 22:23:32.983638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:34.026 [2024-07-14 22:23:32.983971] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:34.026 [2024-07-14 22:23:32.983993] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:34.026 [2024-07-14 22:23:32.984123] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.026 pt2 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.026 22:23:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.026 22:23:32 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.285 22:23:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.285 "name": "raid_bdev1", 00:15:34.285 "uuid": "d459c763-7a02-4d9e-b419-ea301a81736a", 00:15:34.285 "strip_size_kb": 64, 00:15:34.285 "state": "online", 00:15:34.285 "raid_level": "raid0", 00:15:34.285 "superblock": true, 00:15:34.285 "num_base_bdevs": 2, 00:15:34.285 "num_base_bdevs_discovered": 2, 00:15:34.285 "num_base_bdevs_operational": 2, 00:15:34.285 "base_bdevs_list": [ 00:15:34.285 { 00:15:34.285 "name": "pt1", 00:15:34.285 "uuid": "6b54415d-51e4-50b3-b5e7-1cb33b8e2511", 00:15:34.285 "is_configured": true, 00:15:34.285 "data_offset": 2048, 00:15:34.285 "data_size": 63488 00:15:34.285 }, 00:15:34.285 { 00:15:34.285 "name": "pt2", 00:15:34.285 "uuid": "32faea14-18c4-5ac9-980c-c8a664941457", 00:15:34.285 "is_configured": true, 00:15:34.285 "data_offset": 2048, 00:15:34.285 "data_size": 63488 00:15:34.285 } 00:15:34.285 ] 00:15:34.285 }' 00:15:34.285 22:23:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.285 22:23:33 -- common/autotest_common.sh@10 -- # set +x 00:15:35.221 22:23:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:35.221 22:23:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:35.221 [2024-07-14 22:23:34.166995] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.221 22:23:34 -- bdev/bdev_raid.sh@430 -- # '[' d459c763-7a02-4d9e-b419-ea301a81736a '!=' d459c763-7a02-4d9e-b419-ea301a81736a ']' 00:15:35.221 22:23:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:35.221 22:23:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:35.221 22:23:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:35.221 22:23:34 -- bdev/bdev_raid.sh@511 -- # killprocess 115401 00:15:35.221 22:23:34 -- common/autotest_common.sh@926 -- # '[' -z 115401 ']' 00:15:35.221 22:23:34 -- common/autotest_common.sh@930 -- # kill -0 115401 00:15:35.221 22:23:34 -- common/autotest_common.sh@931 -- # uname 00:15:35.221 22:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:35.221 22:23:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115401 00:15:35.221 killing process with pid 115401 00:15:35.221 22:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:35.221 22:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:35.221 22:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115401' 00:15:35.221 22:23:34 -- common/autotest_common.sh@945 -- # kill 115401 00:15:35.221 22:23:34 -- common/autotest_common.sh@950 -- # wait 115401 00:15:35.221 [2024-07-14 22:23:34.203448] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.221 [2024-07-14 22:23:34.203534] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.221 [2024-07-14 22:23:34.204033] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.221 [2024-07-14 22:23:34.204057] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:35.480 [2024-07-14 22:23:34.350262] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.415 ************************************ 00:15:36.415 END TEST raid_superblock_test 00:15:36.415 ************************************ 00:15:36.415 22:23:35 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:15:36.415 00:15:36.415 real 0m8.528s 00:15:36.415 user 0m14.619s 00:15:36.415 sys 0m0.995s 00:15:36.415 22:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.415 22:23:35 -- common/autotest_common.sh@10 -- # set +x 00:15:36.415 22:23:35 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:36.415 22:23:35 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:36.415 22:23:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:36.415 22:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:36.415 22:23:35 -- common/autotest_common.sh@10 -- # set +x 00:15:36.674 ************************************ 00:15:36.674 START TEST raid_state_function_test 00:15:36.674 ************************************ 00:15:36.674 22:23:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=115667 00:15:36.674 Process raid pid: 115667 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115667' 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115667 /var/tmp/spdk-raid.sock 00:15:36.674 22:23:35 -- common/autotest_common.sh@819 -- # '[' -z 115667 ']' 00:15:36.674 22:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.674 22:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.674 22:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
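[Note: raid_state_function_test drives Existed_Raid through its state machine by registering the raid before any base bdev exists. A sketch of the first transition exercised below, using the names from the trace (no -s here, since this variant runs without a superblock):

    # neither BaseBdev1 nor BaseBdev2 exists yet; the raid is registered anyway
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # it stays in "configuring" until every base bdev appears
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
]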
00:15:36.674 22:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:36.674 22:23:35 -- common/autotest_common.sh@10 -- # set +x 00:15:36.674 22:23:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.674 [2024-07-14 22:23:35.545865] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:36.674 [2024-07-14 22:23:35.546232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.674 [2024-07-14 22:23:35.708854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.933 [2024-07-14 22:23:35.930861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.191 [2024-07-14 22:23:36.129816] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.449 22:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.449 22:23:36 -- common/autotest_common.sh@852 -- # return 0 00:15:37.449 22:23:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:37.708 [2024-07-14 22:23:36.659681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.708 [2024-07-14 22:23:36.659819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.708 [2024-07-14 22:23:36.659833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.708 [2024-07-14 22:23:36.659851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.708 22:23:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.967 22:23:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.967 "name": "Existed_Raid", 00:15:37.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.967 "strip_size_kb": 64, 00:15:37.967 "state": "configuring", 00:15:37.967 "raid_level": "concat", 00:15:37.967 "superblock": false, 00:15:37.967 "num_base_bdevs": 2, 00:15:37.967 "num_base_bdevs_discovered": 0, 00:15:37.967 "num_base_bdevs_operational": 2, 00:15:37.967 "base_bdevs_list": [ 00:15:37.967 { 00:15:37.967 "name": "BaseBdev1", 00:15:37.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.967 "is_configured": false, 
00:15:37.967 "data_offset": 0, 00:15:37.967 "data_size": 0 00:15:37.967 }, 00:15:37.967 { 00:15:37.967 "name": "BaseBdev2", 00:15:37.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.967 "is_configured": false, 00:15:37.967 "data_offset": 0, 00:15:37.967 "data_size": 0 00:15:37.967 } 00:15:37.967 ] 00:15:37.967 }' 00:15:37.967 22:23:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.967 22:23:36 -- common/autotest_common.sh@10 -- # set +x 00:15:38.532 22:23:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:38.790 [2024-07-14 22:23:37.779657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.790 [2024-07-14 22:23:37.779704] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:38.790 22:23:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:39.048 [2024-07-14 22:23:37.975699] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.048 [2024-07-14 22:23:37.975811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.048 [2024-07-14 22:23:37.975823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.048 [2024-07-14 22:23:37.975855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.048 22:23:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.306 [2024-07-14 22:23:38.246009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.306 BaseBdev1 00:15:39.306 22:23:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:39.306 22:23:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:39.306 22:23:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:39.306 22:23:38 -- common/autotest_common.sh@889 -- # local i 00:15:39.306 22:23:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:39.306 22:23:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:39.306 22:23:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.564 22:23:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.564 [ 00:15:39.564 { 00:15:39.564 "name": "BaseBdev1", 00:15:39.564 "aliases": [ 00:15:39.564 "a897e613-ed67-42d9-93ec-f0675a6b255a" 00:15:39.564 ], 00:15:39.564 "product_name": "Malloc disk", 00:15:39.564 "block_size": 512, 00:15:39.564 "num_blocks": 65536, 00:15:39.564 "uuid": "a897e613-ed67-42d9-93ec-f0675a6b255a", 00:15:39.564 "assigned_rate_limits": { 00:15:39.564 "rw_ios_per_sec": 0, 00:15:39.564 "rw_mbytes_per_sec": 0, 00:15:39.564 "r_mbytes_per_sec": 0, 00:15:39.564 "w_mbytes_per_sec": 0 00:15:39.564 }, 00:15:39.564 "claimed": true, 00:15:39.564 "claim_type": "exclusive_write", 00:15:39.564 "zoned": false, 00:15:39.564 "supported_io_types": { 00:15:39.564 "read": true, 00:15:39.564 "write": true, 00:15:39.564 "unmap": true, 00:15:39.564 "write_zeroes": true, 00:15:39.564 "flush": true, 00:15:39.564 "reset": true, 00:15:39.564 
"compare": false, 00:15:39.564 "compare_and_write": false, 00:15:39.564 "abort": true, 00:15:39.564 "nvme_admin": false, 00:15:39.564 "nvme_io": false 00:15:39.564 }, 00:15:39.564 "memory_domains": [ 00:15:39.564 { 00:15:39.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.564 "dma_device_type": 2 00:15:39.564 } 00:15:39.564 ], 00:15:39.564 "driver_specific": {} 00:15:39.564 } 00:15:39.564 ] 00:15:39.564 22:23:38 -- common/autotest_common.sh@895 -- # return 0 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.564 22:23:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.822 22:23:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.822 22:23:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.822 22:23:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.822 "name": "Existed_Raid", 00:15:39.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.822 "strip_size_kb": 64, 00:15:39.822 "state": "configuring", 00:15:39.822 "raid_level": "concat", 00:15:39.822 "superblock": false, 00:15:39.822 "num_base_bdevs": 2, 00:15:39.822 "num_base_bdevs_discovered": 1, 00:15:39.822 "num_base_bdevs_operational": 2, 00:15:39.822 "base_bdevs_list": [ 00:15:39.822 { 00:15:39.822 "name": "BaseBdev1", 00:15:39.822 "uuid": "a897e613-ed67-42d9-93ec-f0675a6b255a", 00:15:39.822 "is_configured": true, 00:15:39.822 "data_offset": 0, 00:15:39.822 "data_size": 65536 00:15:39.822 }, 00:15:39.822 { 00:15:39.822 "name": "BaseBdev2", 00:15:39.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.822 "is_configured": false, 00:15:39.822 "data_offset": 0, 00:15:39.822 "data_size": 0 00:15:39.822 } 00:15:39.822 ] 00:15:39.822 }' 00:15:39.822 22:23:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.822 22:23:38 -- common/autotest_common.sh@10 -- # set +x 00:15:40.757 22:23:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.757 [2024-07-14 22:23:39.706376] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.758 [2024-07-14 22:23:39.706435] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:40.758 22:23:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:40.758 22:23:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:41.016 [2024-07-14 22:23:39.890440] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.016 [2024-07-14 22:23:39.892407] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:41.016 [2024-07-14 22:23:39.892465] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.016 22:23:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.275 22:23:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.275 "name": "Existed_Raid", 00:15:41.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.275 "strip_size_kb": 64, 00:15:41.275 "state": "configuring", 00:15:41.275 "raid_level": "concat", 00:15:41.275 "superblock": false, 00:15:41.275 "num_base_bdevs": 2, 00:15:41.275 "num_base_bdevs_discovered": 1, 00:15:41.275 "num_base_bdevs_operational": 2, 00:15:41.275 "base_bdevs_list": [ 00:15:41.275 { 00:15:41.275 "name": "BaseBdev1", 00:15:41.275 "uuid": "a897e613-ed67-42d9-93ec-f0675a6b255a", 00:15:41.275 "is_configured": true, 00:15:41.275 "data_offset": 0, 00:15:41.275 "data_size": 65536 00:15:41.275 }, 00:15:41.275 { 00:15:41.275 "name": "BaseBdev2", 00:15:41.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.275 "is_configured": false, 00:15:41.275 "data_offset": 0, 00:15:41.275 "data_size": 0 00:15:41.275 } 00:15:41.275 ] 00:15:41.275 }' 00:15:41.275 22:23:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.275 22:23:40 -- common/autotest_common.sh@10 -- # set +x 00:15:41.841 22:23:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:42.100 [2024-07-14 22:23:40.987725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.100 [2024-07-14 22:23:40.987789] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:42.100 [2024-07-14 22:23:40.987809] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:42.100 [2024-07-14 22:23:40.987939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:42.100 [2024-07-14 22:23:40.988288] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:42.100 [2024-07-14 22:23:40.988309] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:42.100 BaseBdev2 00:15:42.100 [2024-07-14 22:23:40.988605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.100 22:23:40 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:42.100 22:23:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:42.100 22:23:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:42.100 22:23:40 -- common/autotest_common.sh@889 -- # local i 00:15:42.100 22:23:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:42.100 22:23:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:42.100 22:23:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:42.359 22:23:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.359 [ 00:15:42.359 { 00:15:42.359 "name": "BaseBdev2", 00:15:42.359 "aliases": [ 00:15:42.359 "9487c506-d9d4-4d92-a206-c5eb99c321bc" 00:15:42.359 ], 00:15:42.359 "product_name": "Malloc disk", 00:15:42.359 "block_size": 512, 00:15:42.359 "num_blocks": 65536, 00:15:42.359 "uuid": "9487c506-d9d4-4d92-a206-c5eb99c321bc", 00:15:42.359 "assigned_rate_limits": { 00:15:42.359 "rw_ios_per_sec": 0, 00:15:42.359 "rw_mbytes_per_sec": 0, 00:15:42.359 "r_mbytes_per_sec": 0, 00:15:42.359 "w_mbytes_per_sec": 0 00:15:42.359 }, 00:15:42.359 "claimed": true, 00:15:42.359 "claim_type": "exclusive_write", 00:15:42.359 "zoned": false, 00:15:42.359 "supported_io_types": { 00:15:42.359 "read": true, 00:15:42.359 "write": true, 00:15:42.359 "unmap": true, 00:15:42.359 "write_zeroes": true, 00:15:42.359 "flush": true, 00:15:42.359 "reset": true, 00:15:42.359 "compare": false, 00:15:42.359 "compare_and_write": false, 00:15:42.359 "abort": true, 00:15:42.359 "nvme_admin": false, 00:15:42.359 "nvme_io": false 00:15:42.359 }, 00:15:42.359 "memory_domains": [ 00:15:42.359 { 00:15:42.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.359 "dma_device_type": 2 00:15:42.359 } 00:15:42.359 ], 00:15:42.359 "driver_specific": {} 00:15:42.359 } 00:15:42.359 ] 00:15:42.359 22:23:41 -- common/autotest_common.sh@895 -- # return 0 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.359 22:23:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.617 22:23:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.617 "name": "Existed_Raid", 00:15:42.617 "uuid": "ffce2760-b751-47d6-a256-57ddbd633b87", 00:15:42.617 "strip_size_kb": 64, 00:15:42.617 "state": "online", 00:15:42.617 "raid_level": "concat", 00:15:42.617 "superblock": false, 
00:15:42.617 "num_base_bdevs": 2, 00:15:42.617 "num_base_bdevs_discovered": 2, 00:15:42.617 "num_base_bdevs_operational": 2, 00:15:42.617 "base_bdevs_list": [ 00:15:42.617 { 00:15:42.617 "name": "BaseBdev1", 00:15:42.617 "uuid": "a897e613-ed67-42d9-93ec-f0675a6b255a", 00:15:42.617 "is_configured": true, 00:15:42.617 "data_offset": 0, 00:15:42.617 "data_size": 65536 00:15:42.617 }, 00:15:42.617 { 00:15:42.617 "name": "BaseBdev2", 00:15:42.617 "uuid": "9487c506-d9d4-4d92-a206-c5eb99c321bc", 00:15:42.617 "is_configured": true, 00:15:42.617 "data_offset": 0, 00:15:42.617 "data_size": 65536 00:15:42.617 } 00:15:42.617 ] 00:15:42.617 }' 00:15:42.617 22:23:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.617 22:23:41 -- common/autotest_common.sh@10 -- # set +x 00:15:43.552 22:23:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:43.552 [2024-07-14 22:23:42.588261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.552 [2024-07-14 22:23:42.588296] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.552 [2024-07-14 22:23:42.588375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.826 22:23:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.122 22:23:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.122 "name": "Existed_Raid", 00:15:44.122 "uuid": "ffce2760-b751-47d6-a256-57ddbd633b87", 00:15:44.122 "strip_size_kb": 64, 00:15:44.122 "state": "offline", 00:15:44.122 "raid_level": "concat", 00:15:44.122 "superblock": false, 00:15:44.122 "num_base_bdevs": 2, 00:15:44.122 "num_base_bdevs_discovered": 1, 00:15:44.122 "num_base_bdevs_operational": 1, 00:15:44.122 "base_bdevs_list": [ 00:15:44.122 { 00:15:44.122 "name": null, 00:15:44.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.122 "is_configured": false, 00:15:44.122 "data_offset": 0, 00:15:44.122 "data_size": 65536 00:15:44.122 }, 00:15:44.122 { 00:15:44.122 "name": "BaseBdev2", 00:15:44.122 "uuid": "9487c506-d9d4-4d92-a206-c5eb99c321bc", 00:15:44.122 "is_configured": true, 00:15:44.122 "data_offset": 0, 00:15:44.122 
"data_size": 65536 00:15:44.122 } 00:15:44.122 ] 00:15:44.122 }' 00:15:44.122 22:23:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.122 22:23:42 -- common/autotest_common.sh@10 -- # set +x 00:15:44.708 22:23:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:44.708 22:23:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:44.708 22:23:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.708 22:23:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:44.965 22:23:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:44.965 22:23:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:44.965 22:23:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:45.224 [2024-07-14 22:23:44.173060] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.224 [2024-07-14 22:23:44.173177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:45.224 22:23:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:45.224 22:23:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:45.224 22:23:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.224 22:23:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:45.483 22:23:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:45.483 22:23:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:45.483 22:23:44 -- bdev/bdev_raid.sh@287 -- # killprocess 115667 00:15:45.483 22:23:44 -- common/autotest_common.sh@926 -- # '[' -z 115667 ']' 00:15:45.483 22:23:44 -- common/autotest_common.sh@930 -- # kill -0 115667 00:15:45.483 22:23:44 -- common/autotest_common.sh@931 -- # uname 00:15:45.483 22:23:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:45.483 22:23:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115667 00:15:45.483 22:23:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:45.483 22:23:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:45.483 22:23:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115667' 00:15:45.483 killing process with pid 115667 00:15:45.483 22:23:44 -- common/autotest_common.sh@945 -- # kill 115667 00:15:45.483 22:23:44 -- common/autotest_common.sh@950 -- # wait 115667 00:15:45.483 [2024-07-14 22:23:44.460862] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.483 [2024-07-14 22:23:44.461029] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:46.859 00:15:46.859 real 0m10.045s 00:15:46.859 user 0m17.600s 00:15:46.859 sys 0m1.119s 00:15:46.859 22:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.859 22:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.859 ************************************ 00:15:46.859 END TEST raid_state_function_test 00:15:46.859 ************************************ 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:46.859 22:23:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:46.859 22:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.859 22:23:45 -- common/autotest_common.sh@10 -- # 
set +x 00:15:46.859 ************************************ 00:15:46.859 START TEST raid_state_function_test_sb 00:15:46.859 ************************************ 00:15:46.859 22:23:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=116012 00:15:46.859 Process raid pid: 116012 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116012' 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116012 /var/tmp/spdk-raid.sock 00:15:46.859 22:23:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:46.859 22:23:45 -- common/autotest_common.sh@819 -- # '[' -z 116012 ']' 00:15:46.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.859 22:23:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.859 22:23:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.859 22:23:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:46.859 22:23:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.859 22:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.859 [2024-07-14 22:23:45.657273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
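[Note: raid_state_function_test_sb repeats the same state-machine walk with superblocks enabled; the only change visible in the trace below is the extra -s passed to bdev_raid_create, which is why the configured descriptors report data_offset 2048 and data_size 63488 instead of 0 and 65536:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
]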
00:15:46.859 [2024-07-14 22:23:45.658057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.859 [2024-07-14 22:23:45.827052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.117 [2024-07-14 22:23:46.017431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.375 [2024-07-14 22:23:46.210373] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.634 22:23:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.634 22:23:46 -- common/autotest_common.sh@852 -- # return 0 00:15:47.634 22:23:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.892 [2024-07-14 22:23:46.798373] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.892 [2024-07-14 22:23:46.798472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.892 [2024-07-14 22:23:46.798500] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.892 [2024-07-14 22:23:46.798519] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.892 22:23:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.150 22:23:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.150 "name": "Existed_Raid", 00:15:48.150 "uuid": "13307398-957c-4d98-a4d6-22ab02a1b0d9", 00:15:48.150 "strip_size_kb": 64, 00:15:48.150 "state": "configuring", 00:15:48.150 "raid_level": "concat", 00:15:48.150 "superblock": true, 00:15:48.150 "num_base_bdevs": 2, 00:15:48.150 "num_base_bdevs_discovered": 0, 00:15:48.150 "num_base_bdevs_operational": 2, 00:15:48.150 "base_bdevs_list": [ 00:15:48.150 { 00:15:48.150 "name": "BaseBdev1", 00:15:48.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.150 "is_configured": false, 00:15:48.150 "data_offset": 0, 00:15:48.150 "data_size": 0 00:15:48.150 }, 00:15:48.150 { 00:15:48.150 "name": "BaseBdev2", 00:15:48.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.150 "is_configured": false, 00:15:48.150 "data_offset": 0, 00:15:48.150 "data_size": 0 00:15:48.150 } 00:15:48.150 ] 00:15:48.150 }' 00:15:48.150 22:23:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.150 22:23:47 -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.715 22:23:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.973 [2024-07-14 22:23:47.890427] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.973 [2024-07-14 22:23:47.890466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:48.973 22:23:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:49.231 [2024-07-14 22:23:48.134506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.231 [2024-07-14 22:23:48.134580] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.231 [2024-07-14 22:23:48.134607] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.231 [2024-07-14 22:23:48.134630] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.231 22:23:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.489 [2024-07-14 22:23:48.362163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.489 BaseBdev1 00:15:49.489 22:23:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:49.489 22:23:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:49.489 22:23:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:49.489 22:23:48 -- common/autotest_common.sh@889 -- # local i 00:15:49.489 22:23:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:49.489 22:23:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:49.489 22:23:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.747 22:23:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.747 [ 00:15:49.747 { 00:15:49.747 "name": "BaseBdev1", 00:15:49.747 "aliases": [ 00:15:49.747 "2fa212fb-976d-4a70-a8d5-21e27d65ae1c" 00:15:49.747 ], 00:15:49.747 "product_name": "Malloc disk", 00:15:49.747 "block_size": 512, 00:15:49.747 "num_blocks": 65536, 00:15:49.747 "uuid": "2fa212fb-976d-4a70-a8d5-21e27d65ae1c", 00:15:49.747 "assigned_rate_limits": { 00:15:49.747 "rw_ios_per_sec": 0, 00:15:49.747 "rw_mbytes_per_sec": 0, 00:15:49.747 "r_mbytes_per_sec": 0, 00:15:49.747 "w_mbytes_per_sec": 0 00:15:49.747 }, 00:15:49.747 "claimed": true, 00:15:49.747 "claim_type": "exclusive_write", 00:15:49.747 "zoned": false, 00:15:49.747 "supported_io_types": { 00:15:49.747 "read": true, 00:15:49.747 "write": true, 00:15:49.747 "unmap": true, 00:15:49.747 "write_zeroes": true, 00:15:49.747 "flush": true, 00:15:49.747 "reset": true, 00:15:49.747 "compare": false, 00:15:49.747 "compare_and_write": false, 00:15:49.747 "abort": true, 00:15:49.747 "nvme_admin": false, 00:15:49.747 "nvme_io": false 00:15:49.747 }, 00:15:49.747 "memory_domains": [ 00:15:49.747 { 00:15:49.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.747 "dma_device_type": 2 00:15:49.747 } 00:15:49.747 ], 00:15:49.747 "driver_specific": {} 00:15:49.747 } 00:15:49.747 ] 00:15:49.747 
22:23:48 -- common/autotest_common.sh@895 -- # return 0 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.747 22:23:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.005 22:23:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.005 "name": "Existed_Raid", 00:15:50.005 "uuid": "1e818ec2-84d1-4879-bd7f-df958f15711f", 00:15:50.005 "strip_size_kb": 64, 00:15:50.005 "state": "configuring", 00:15:50.005 "raid_level": "concat", 00:15:50.005 "superblock": true, 00:15:50.005 "num_base_bdevs": 2, 00:15:50.005 "num_base_bdevs_discovered": 1, 00:15:50.005 "num_base_bdevs_operational": 2, 00:15:50.005 "base_bdevs_list": [ 00:15:50.005 { 00:15:50.005 "name": "BaseBdev1", 00:15:50.005 "uuid": "2fa212fb-976d-4a70-a8d5-21e27d65ae1c", 00:15:50.005 "is_configured": true, 00:15:50.005 "data_offset": 2048, 00:15:50.005 "data_size": 63488 00:15:50.005 }, 00:15:50.005 { 00:15:50.005 "name": "BaseBdev2", 00:15:50.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.005 "is_configured": false, 00:15:50.005 "data_offset": 0, 00:15:50.005 "data_size": 0 00:15:50.005 } 00:15:50.005 ] 00:15:50.005 }' 00:15:50.005 22:23:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.005 22:23:49 -- common/autotest_common.sh@10 -- # set +x 00:15:50.939 22:23:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.939 [2024-07-14 22:23:49.822500] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.939 [2024-07-14 22:23:49.822572] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:50.939 22:23:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:50.939 22:23:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:51.196 22:23:50 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:51.454 BaseBdev1 00:15:51.454 22:23:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:51.454 22:23:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:51.454 22:23:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:51.454 22:23:50 -- common/autotest_common.sh@889 -- # local i 00:15:51.454 22:23:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:51.454 22:23:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:51.454 22:23:50 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.454 22:23:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.712 [ 00:15:51.712 { 00:15:51.712 "name": "BaseBdev1", 00:15:51.712 "aliases": [ 00:15:51.712 "8210bf6e-fc21-44a3-bf67-76501d6b8285" 00:15:51.712 ], 00:15:51.712 "product_name": "Malloc disk", 00:15:51.712 "block_size": 512, 00:15:51.712 "num_blocks": 65536, 00:15:51.712 "uuid": "8210bf6e-fc21-44a3-bf67-76501d6b8285", 00:15:51.712 "assigned_rate_limits": { 00:15:51.712 "rw_ios_per_sec": 0, 00:15:51.712 "rw_mbytes_per_sec": 0, 00:15:51.712 "r_mbytes_per_sec": 0, 00:15:51.712 "w_mbytes_per_sec": 0 00:15:51.712 }, 00:15:51.712 "claimed": false, 00:15:51.712 "zoned": false, 00:15:51.712 "supported_io_types": { 00:15:51.712 "read": true, 00:15:51.712 "write": true, 00:15:51.712 "unmap": true, 00:15:51.712 "write_zeroes": true, 00:15:51.712 "flush": true, 00:15:51.712 "reset": true, 00:15:51.712 "compare": false, 00:15:51.712 "compare_and_write": false, 00:15:51.712 "abort": true, 00:15:51.712 "nvme_admin": false, 00:15:51.712 "nvme_io": false 00:15:51.712 }, 00:15:51.712 "memory_domains": [ 00:15:51.712 { 00:15:51.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.712 "dma_device_type": 2 00:15:51.712 } 00:15:51.712 ], 00:15:51.712 "driver_specific": {} 00:15:51.712 } 00:15:51.712 ] 00:15:51.712 22:23:50 -- common/autotest_common.sh@895 -- # return 0 00:15:51.712 22:23:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:51.971 [2024-07-14 22:23:50.895232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.971 [2024-07-14 22:23:50.897440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.971 [2024-07-14 22:23:50.897525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.971 22:23:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.228 22:23:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.228 "name": "Existed_Raid", 00:15:52.228 "uuid": "61abe565-7db8-4bfe-8761-7cb672223499", 00:15:52.228 "strip_size_kb": 64, 00:15:52.228 "state": 
"configuring", 00:15:52.228 "raid_level": "concat", 00:15:52.228 "superblock": true, 00:15:52.228 "num_base_bdevs": 2, 00:15:52.228 "num_base_bdevs_discovered": 1, 00:15:52.228 "num_base_bdevs_operational": 2, 00:15:52.228 "base_bdevs_list": [ 00:15:52.228 { 00:15:52.228 "name": "BaseBdev1", 00:15:52.228 "uuid": "8210bf6e-fc21-44a3-bf67-76501d6b8285", 00:15:52.228 "is_configured": true, 00:15:52.228 "data_offset": 2048, 00:15:52.228 "data_size": 63488 00:15:52.228 }, 00:15:52.228 { 00:15:52.228 "name": "BaseBdev2", 00:15:52.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.228 "is_configured": false, 00:15:52.228 "data_offset": 0, 00:15:52.228 "data_size": 0 00:15:52.228 } 00:15:52.228 ] 00:15:52.228 }' 00:15:52.228 22:23:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.228 22:23:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.793 22:23:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.051 [2024-07-14 22:23:51.960368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.051 [2024-07-14 22:23:51.960598] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:53.051 [2024-07-14 22:23:51.960613] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:53.051 BaseBdev2 00:15:53.051 [2024-07-14 22:23:51.960797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:53.051 [2024-07-14 22:23:51.961202] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:53.051 [2024-07-14 22:23:51.961223] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:53.051 [2024-07-14 22:23:51.961371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.051 22:23:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:53.051 22:23:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:53.051 22:23:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:53.051 22:23:51 -- common/autotest_common.sh@889 -- # local i 00:15:53.051 22:23:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:53.051 22:23:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:53.051 22:23:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.310 22:23:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.310 [ 00:15:53.310 { 00:15:53.310 "name": "BaseBdev2", 00:15:53.310 "aliases": [ 00:15:53.310 "56d6fc3f-0d3e-422f-9ed7-fb404e83feb7" 00:15:53.310 ], 00:15:53.310 "product_name": "Malloc disk", 00:15:53.310 "block_size": 512, 00:15:53.310 "num_blocks": 65536, 00:15:53.310 "uuid": "56d6fc3f-0d3e-422f-9ed7-fb404e83feb7", 00:15:53.310 "assigned_rate_limits": { 00:15:53.310 "rw_ios_per_sec": 0, 00:15:53.310 "rw_mbytes_per_sec": 0, 00:15:53.310 "r_mbytes_per_sec": 0, 00:15:53.310 "w_mbytes_per_sec": 0 00:15:53.310 }, 00:15:53.310 "claimed": true, 00:15:53.310 "claim_type": "exclusive_write", 00:15:53.310 "zoned": false, 00:15:53.310 "supported_io_types": { 00:15:53.310 "read": true, 00:15:53.310 "write": true, 00:15:53.310 "unmap": true, 00:15:53.310 "write_zeroes": true, 00:15:53.310 "flush": true, 00:15:53.310 
"reset": true, 00:15:53.310 "compare": false, 00:15:53.310 "compare_and_write": false, 00:15:53.310 "abort": true, 00:15:53.310 "nvme_admin": false, 00:15:53.310 "nvme_io": false 00:15:53.310 }, 00:15:53.310 "memory_domains": [ 00:15:53.310 { 00:15:53.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.310 "dma_device_type": 2 00:15:53.310 } 00:15:53.310 ], 00:15:53.310 "driver_specific": {} 00:15:53.310 } 00:15:53.310 ] 00:15:53.310 22:23:52 -- common/autotest_common.sh@895 -- # return 0 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.310 22:23:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.569 22:23:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.569 "name": "Existed_Raid", 00:15:53.569 "uuid": "61abe565-7db8-4bfe-8761-7cb672223499", 00:15:53.569 "strip_size_kb": 64, 00:15:53.569 "state": "online", 00:15:53.569 "raid_level": "concat", 00:15:53.569 "superblock": true, 00:15:53.569 "num_base_bdevs": 2, 00:15:53.569 "num_base_bdevs_discovered": 2, 00:15:53.569 "num_base_bdevs_operational": 2, 00:15:53.569 "base_bdevs_list": [ 00:15:53.569 { 00:15:53.569 "name": "BaseBdev1", 00:15:53.569 "uuid": "8210bf6e-fc21-44a3-bf67-76501d6b8285", 00:15:53.569 "is_configured": true, 00:15:53.569 "data_offset": 2048, 00:15:53.569 "data_size": 63488 00:15:53.569 }, 00:15:53.569 { 00:15:53.569 "name": "BaseBdev2", 00:15:53.569 "uuid": "56d6fc3f-0d3e-422f-9ed7-fb404e83feb7", 00:15:53.569 "is_configured": true, 00:15:53.569 "data_offset": 2048, 00:15:53.569 "data_size": 63488 00:15:53.569 } 00:15:53.569 ] 00:15:53.569 }' 00:15:53.569 22:23:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.569 22:23:52 -- common/autotest_common.sh@10 -- # set +x 00:15:54.136 22:23:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:54.394 [2024-07-14 22:23:53.380808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.394 [2024-07-14 22:23:53.380845] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.394 [2024-07-14 22:23:53.380949] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:54.652 
22:23:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.652 22:23:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.911 22:23:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.911 "name": "Existed_Raid", 00:15:54.911 "uuid": "61abe565-7db8-4bfe-8761-7cb672223499", 00:15:54.911 "strip_size_kb": 64, 00:15:54.911 "state": "offline", 00:15:54.911 "raid_level": "concat", 00:15:54.911 "superblock": true, 00:15:54.911 "num_base_bdevs": 2, 00:15:54.911 "num_base_bdevs_discovered": 1, 00:15:54.911 "num_base_bdevs_operational": 1, 00:15:54.911 "base_bdevs_list": [ 00:15:54.911 { 00:15:54.911 "name": null, 00:15:54.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.911 "is_configured": false, 00:15:54.911 "data_offset": 2048, 00:15:54.911 "data_size": 63488 00:15:54.911 }, 00:15:54.911 { 00:15:54.911 "name": "BaseBdev2", 00:15:54.911 "uuid": "56d6fc3f-0d3e-422f-9ed7-fb404e83feb7", 00:15:54.911 "is_configured": true, 00:15:54.911 "data_offset": 2048, 00:15:54.911 "data_size": 63488 00:15:54.911 } 00:15:54.911 ] 00:15:54.911 }' 00:15:54.911 22:23:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.911 22:23:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.476 22:23:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:55.476 22:23:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.476 22:23:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.476 22:23:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.733 22:23:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.733 22:23:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.733 22:23:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:55.991 [2024-07-14 22:23:54.851605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.991 [2024-07-14 22:23:54.851677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:55.991 22:23:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.991 22:23:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.991 22:23:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.991 22:23:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.250 22:23:55 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:56.250 22:23:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:56.250 22:23:55 -- bdev/bdev_raid.sh@287 -- # killprocess 116012 00:15:56.250 22:23:55 -- common/autotest_common.sh@926 -- # '[' -z 116012 ']' 00:15:56.250 22:23:55 -- common/autotest_common.sh@930 -- # kill -0 116012 00:15:56.250 22:23:55 -- common/autotest_common.sh@931 -- # uname 00:15:56.250 22:23:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.250 22:23:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116012 00:15:56.250 killing process with pid 116012 00:15:56.250 22:23:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.250 22:23:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.250 22:23:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116012' 00:15:56.250 22:23:55 -- common/autotest_common.sh@945 -- # kill 116012 00:15:56.250 22:23:55 -- common/autotest_common.sh@950 -- # wait 116012 00:15:56.250 [2024-07-14 22:23:55.143678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.250 [2024-07-14 22:23:55.143859] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.194 ************************************ 00:15:57.194 END TEST raid_state_function_test_sb 00:15:57.194 ************************************ 00:15:57.194 22:23:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:57.194 00:15:57.194 real 0m10.631s 00:15:57.194 user 0m18.418s 00:15:57.194 sys 0m1.337s 00:15:57.194 22:23:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.194 22:23:56 -- common/autotest_common.sh@10 -- # set +x 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:57.470 22:23:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:57.470 22:23:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.470 22:23:56 -- common/autotest_common.sh@10 -- # set +x 00:15:57.470 ************************************ 00:15:57.470 START TEST raid_superblock_test 00:15:57.470 ************************************ 00:15:57.470 22:23:56 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@357 -- # raid_pid=116356 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116356 
/var/tmp/spdk-raid.sock 00:15:57.470 22:23:56 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:57.470 22:23:56 -- common/autotest_common.sh@819 -- # '[' -z 116356 ']' 00:15:57.470 22:23:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.470 22:23:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:57.470 22:23:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.470 22:23:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:57.470 22:23:56 -- common/autotest_common.sh@10 -- # set +x 00:15:57.470 [2024-07-14 22:23:56.325481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:57.470 [2024-07-14 22:23:56.325659] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116356 ] 00:15:57.470 [2024-07-14 22:23:56.475175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.729 [2024-07-14 22:23:56.683327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.988 [2024-07-14 22:23:56.878928] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.247 22:23:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.247 22:23:57 -- common/autotest_common.sh@852 -- # return 0 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.247 22:23:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:58.506 malloc1 00:15:58.506 22:23:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.764 [2024-07-14 22:23:57.693553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.764 [2024-07-14 22:23:57.693667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.764 [2024-07-14 22:23:57.693699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:58.764 [2024-07-14 22:23:57.693748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.764 [2024-07-14 22:23:57.696497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.764 [2024-07-14 22:23:57.696563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.764 pt1 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
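Each pass of this loop stacks a passthru bdev on top of a fresh malloc disk, so bdev_raid_create later receives stable, UUID-tagged base devices. One iteration reduces to two RPCs; the sizes (32 MiB, 512-byte blocks), names, and fixed UUID below are the test's own values from the trace, and the $RPC shorthand is ours:

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MiB malloc disk with 512-byte blocks, as created above.
$RPC bdev_malloc_create 32 512 -b malloc1

# Passthru layered on it; the fixed UUID makes the base bdev easy to
# identify when the raid superblock is checked later.
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
```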
00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.764 22:23:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:59.023 malloc2 00:15:59.023 22:23:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.281 [2024-07-14 22:23:58.120523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.281 [2024-07-14 22:23:58.120623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.281 [2024-07-14 22:23:58.120665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:59.281 [2024-07-14 22:23:58.120719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.281 [2024-07-14 22:23:58.123264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.282 [2024-07-14 22:23:58.123328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.282 pt2 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:59.282 [2024-07-14 22:23:58.304659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.282 [2024-07-14 22:23:58.306995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.282 [2024-07-14 22:23:58.307203] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:59.282 [2024-07-14 22:23:58.307218] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.282 [2024-07-14 22:23:58.307358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:59.282 [2024-07-14 22:23:58.307730] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:59.282 [2024-07-14 22:23:58.307744] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:59.282 [2024-07-14 22:23:58.307903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
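The verify_raid_bdev_state locals being declared here frame the round trip the sketch below condenses: create the superblock-backed array over the two passthru bdevs, then filter bdev_raid_get_bdevs down to it with the jq expression the helper uses (flags exactly as in the trace; the $RPC shorthand is ours):

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# -z 64: 64 KiB strip size; -s: persist a superblock on each base bdev.
$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

# The helper filters the full listing down to the one array; its
# "state" field should now read "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```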
00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.282 22:23:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.540 22:23:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.540 "name": "raid_bdev1", 00:15:59.540 "uuid": "852af351-4b31-4a45-90e4-a70c4fa89915", 00:15:59.540 "strip_size_kb": 64, 00:15:59.540 "state": "online", 00:15:59.540 "raid_level": "concat", 00:15:59.540 "superblock": true, 00:15:59.540 "num_base_bdevs": 2, 00:15:59.540 "num_base_bdevs_discovered": 2, 00:15:59.540 "num_base_bdevs_operational": 2, 00:15:59.540 "base_bdevs_list": [ 00:15:59.540 { 00:15:59.540 "name": "pt1", 00:15:59.540 "uuid": "97abb365-c738-54f0-898f-77e6003657bd", 00:15:59.540 "is_configured": true, 00:15:59.540 "data_offset": 2048, 00:15:59.540 "data_size": 63488 00:15:59.540 }, 00:15:59.540 { 00:15:59.540 "name": "pt2", 00:15:59.540 "uuid": "c0cf5bf4-9288-5def-9f8e-ddbcdc5c9853", 00:15:59.540 "is_configured": true, 00:15:59.540 "data_offset": 2048, 00:15:59.540 "data_size": 63488 00:15:59.540 } 00:15:59.540 ] 00:15:59.540 }' 00:15:59.540 22:23:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.540 22:23:58 -- common/autotest_common.sh@10 -- # set +x 00:16:00.476 22:23:59 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:00.476 22:23:59 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:00.476 [2024-07-14 22:23:59.381184] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.476 22:23:59 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=852af351-4b31-4a45-90e4-a70c4fa89915 00:16:00.476 22:23:59 -- bdev/bdev_raid.sh@380 -- # '[' -z 852af351-4b31-4a45-90e4-a70c4fa89915 ']' 00:16:00.476 22:23:59 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.736 [2024-07-14 22:23:59.616964] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.736 [2024-07-14 22:23:59.617013] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.736 [2024-07-14 22:23:59.617131] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.736 [2024-07-14 22:23:59.617247] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.736 [2024-07-14 22:23:59.617259] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:00.736 22:23:59 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.736 22:23:59 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:00.994 22:23:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:00.994 22:23:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:00.994 22:23:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.994 22:23:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
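Deleting the array and then its passthru layers, as happens here, does not scrub the superblocks that -s wrote to the underlying malloc disks. That persistence is what the NOT-wrapped bdev_raid_create attempt a little further on trips over: SPDK detects the existing superblock on malloc1 and malloc2 and refuses with JSON-RPC error -17, "File exists". A sketch of that teardown-then-recreate probe (commands as in the trace; the trailing echo is only illustrative):

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Tear down the array and the passthru bdevs; the superblocks stay
# behind on the malloc disks underneath.
$RPC bdev_raid_delete raid_bdev1
$RPC bdev_passthru_delete pt1
$RPC bdev_passthru_delete pt2

# Recreating directly on the malloc bdevs is expected to fail, since
# their existing superblocks already describe a raid_bdev1.
$RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 \
    || echo "recreate refused, as the test expects"
```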
00:16:01.251 22:24:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.251 22:24:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:01.251 22:24:00 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:01.251 22:24:00 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.510 22:24:00 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:01.510 22:24:00 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.510 22:24:00 -- common/autotest_common.sh@640 -- # local es=0 00:16:01.510 22:24:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.510 22:24:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.510 22:24:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.510 22:24:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.510 22:24:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.510 22:24:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.510 22:24:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.510 22:24:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.510 22:24:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:01.510 22:24:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.768 [2024-07-14 22:24:00.689143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:01.768 [2024-07-14 22:24:00.691210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:01.768 [2024-07-14 22:24:00.691289] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:01.768 [2024-07-14 22:24:00.691371] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:01.768 [2024-07-14 22:24:00.691439] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.768 [2024-07-14 22:24:00.691450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:01.768 request: 00:16:01.768 { 00:16:01.768 "name": "raid_bdev1", 00:16:01.768 "raid_level": "concat", 00:16:01.768 "base_bdevs": [ 00:16:01.768 "malloc1", 00:16:01.768 "malloc2" 00:16:01.768 ], 00:16:01.768 "superblock": false, 00:16:01.768 "strip_size_kb": 64, 00:16:01.768 "method": "bdev_raid_create", 00:16:01.768 "req_id": 1 00:16:01.768 } 00:16:01.768 Got JSON-RPC error response 00:16:01.768 response: 00:16:01.768 { 00:16:01.768 "code": -17, 00:16:01.768 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:01.768 } 00:16:01.768 22:24:00 -- common/autotest_common.sh@643 -- # es=1 00:16:01.768 22:24:00 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:01.768 22:24:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:01.768 22:24:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:01.768 22:24:00 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.768 22:24:00 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:02.026 22:24:00 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:02.026 22:24:00 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:02.026 22:24:00 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.284 [2024-07-14 22:24:01.109220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.284 [2024-07-14 22:24:01.109360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.284 [2024-07-14 22:24:01.109400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.284 [2024-07-14 22:24:01.109433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.284 [2024-07-14 22:24:01.112038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.284 [2024-07-14 22:24:01.112108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.284 [2024-07-14 22:24:01.112203] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:02.284 [2024-07-14 22:24:01.112287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.284 pt1 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.284 22:24:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.540 22:24:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.540 "name": "raid_bdev1", 00:16:02.540 "uuid": "852af351-4b31-4a45-90e4-a70c4fa89915", 00:16:02.540 "strip_size_kb": 64, 00:16:02.540 "state": "configuring", 00:16:02.540 "raid_level": "concat", 00:16:02.540 "superblock": true, 00:16:02.540 "num_base_bdevs": 2, 00:16:02.540 "num_base_bdevs_discovered": 1, 00:16:02.540 "num_base_bdevs_operational": 2, 00:16:02.540 "base_bdevs_list": [ 00:16:02.540 { 00:16:02.540 "name": "pt1", 00:16:02.540 "uuid": "97abb365-c738-54f0-898f-77e6003657bd", 00:16:02.540 "is_configured": true, 00:16:02.540 "data_offset": 2048, 00:16:02.540 "data_size": 63488 00:16:02.540 }, 00:16:02.540 { 00:16:02.540 "name": null, 00:16:02.540 "uuid": 
"c0cf5bf4-9288-5def-9f8e-ddbcdc5c9853", 00:16:02.540 "is_configured": false, 00:16:02.540 "data_offset": 2048, 00:16:02.540 "data_size": 63488 00:16:02.540 } 00:16:02.540 ] 00:16:02.540 }' 00:16:02.540 22:24:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.540 22:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:03.103 22:24:02 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:03.103 22:24:02 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:03.103 22:24:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:03.103 22:24:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.361 [2024-07-14 22:24:02.253550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.361 [2024-07-14 22:24:02.253676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.361 [2024-07-14 22:24:02.253733] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.361 [2024-07-14 22:24:02.253757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.361 [2024-07-14 22:24:02.254299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.361 [2024-07-14 22:24:02.254347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.361 [2024-07-14 22:24:02.254449] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:03.361 [2024-07-14 22:24:02.254475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.361 [2024-07-14 22:24:02.254612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:03.361 [2024-07-14 22:24:02.254624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.361 [2024-07-14 22:24:02.254741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:03.361 [2024-07-14 22:24:02.255028] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:03.361 [2024-07-14 22:24:02.255041] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:03.361 [2024-07-14 22:24:02.255215] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.361 pt2 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.361 22:24:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.618 22:24:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.618 "name": "raid_bdev1", 00:16:03.618 "uuid": "852af351-4b31-4a45-90e4-a70c4fa89915", 00:16:03.618 "strip_size_kb": 64, 00:16:03.618 "state": "online", 00:16:03.618 "raid_level": "concat", 00:16:03.618 "superblock": true, 00:16:03.618 "num_base_bdevs": 2, 00:16:03.618 "num_base_bdevs_discovered": 2, 00:16:03.618 "num_base_bdevs_operational": 2, 00:16:03.618 "base_bdevs_list": [ 00:16:03.618 { 00:16:03.618 "name": "pt1", 00:16:03.618 "uuid": "97abb365-c738-54f0-898f-77e6003657bd", 00:16:03.619 "is_configured": true, 00:16:03.619 "data_offset": 2048, 00:16:03.619 "data_size": 63488 00:16:03.619 }, 00:16:03.619 { 00:16:03.619 "name": "pt2", 00:16:03.619 "uuid": "c0cf5bf4-9288-5def-9f8e-ddbcdc5c9853", 00:16:03.619 "is_configured": true, 00:16:03.619 "data_offset": 2048, 00:16:03.619 "data_size": 63488 00:16:03.619 } 00:16:03.619 ] 00:16:03.619 }' 00:16:03.619 22:24:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.619 22:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 22:24:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.183 22:24:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:04.440 [2024-07-14 22:24:03.406010] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.440 22:24:03 -- bdev/bdev_raid.sh@430 -- # '[' 852af351-4b31-4a45-90e4-a70c4fa89915 '!=' 852af351-4b31-4a45-90e4-a70c4fa89915 ']' 00:16:04.440 22:24:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:04.440 22:24:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:04.440 22:24:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:04.440 22:24:03 -- bdev/bdev_raid.sh@511 -- # killprocess 116356 00:16:04.440 22:24:03 -- common/autotest_common.sh@926 -- # '[' -z 116356 ']' 00:16:04.440 22:24:03 -- common/autotest_common.sh@930 -- # kill -0 116356 00:16:04.440 22:24:03 -- common/autotest_common.sh@931 -- # uname 00:16:04.440 22:24:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.440 22:24:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116356 00:16:04.440 killing process with pid 116356 00:16:04.440 22:24:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.440 22:24:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.440 22:24:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116356' 00:16:04.440 22:24:03 -- common/autotest_common.sh@945 -- # kill 116356 00:16:04.440 22:24:03 -- common/autotest_common.sh@950 -- # wait 116356 00:16:04.440 [2024-07-14 22:24:03.455903] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.440 [2024-07-14 22:24:03.456019] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.440 [2024-07-14 22:24:03.456082] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.440 [2024-07-14 22:24:03.456100] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:04.699 [2024-07-14 22:24:03.600176] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.633 ************************************ 00:16:05.633 END TEST raid_superblock_test 00:16:05.633 
************************************ 00:16:05.633 22:24:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:05.633 00:16:05.633 real 0m8.381s 00:16:05.633 user 0m14.364s 00:16:05.633 sys 0m0.991s 00:16:05.633 22:24:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.633 22:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:05.891 22:24:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:05.891 22:24:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.891 22:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:05.891 ************************************ 00:16:05.891 START TEST raid_state_function_test 00:16:05.891 ************************************ 00:16:05.891 22:24:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=116619 00:16:05.891 Process raid pid: 116619 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116619' 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116619 /var/tmp/spdk-raid.sock 00:16:05.891 22:24:04 -- common/autotest_common.sh@819 -- # '[' -z 116619 ']' 00:16:05.891 22:24:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:05.891 22:24:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:05.891 22:24:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
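raid_state_function_test now replays the state checks for raid1: strip_size is forced to 0 because mirroring has no stripes, and superblock stays false. The create call it is about to issue differs from the concat runs only in the level argument and the absent -z/-s flags; for comparison, under the same socket and names as the trace:

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid1 takes no -z strip size; without -s no superblock is written.
# The bdev sits in "configuring" until both base bdevs exist.
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
```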
00:16:05.891 22:24:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.891 22:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:05.891 22:24:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:05.891 [2024-07-14 22:24:04.777121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:05.891 [2024-07-14 22:24:04.777525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.891 [2024-07-14 22:24:04.944371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.150 [2024-07-14 22:24:05.140846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.408 [2024-07-14 22:24:05.333749] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.667 22:24:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.667 22:24:05 -- common/autotest_common.sh@852 -- # return 0 00:16:06.667 22:24:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:06.925 [2024-07-14 22:24:05.879490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.925 [2024-07-14 22:24:05.879586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.925 [2024-07-14 22:24:05.879599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.925 [2024-07-14 22:24:05.879617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.925 22:24:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.185 22:24:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.185 "name": "Existed_Raid", 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "strip_size_kb": 0, 00:16:07.185 "state": "configuring", 00:16:07.185 "raid_level": "raid1", 00:16:07.185 "superblock": false, 00:16:07.185 "num_base_bdevs": 2, 00:16:07.185 "num_base_bdevs_discovered": 0, 00:16:07.185 "num_base_bdevs_operational": 2, 00:16:07.185 "base_bdevs_list": [ 00:16:07.185 { 00:16:07.185 "name": "BaseBdev1", 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "is_configured": false, 00:16:07.185 
"data_offset": 0, 00:16:07.185 "data_size": 0 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": "BaseBdev2", 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "is_configured": false, 00:16:07.185 "data_offset": 0, 00:16:07.185 "data_size": 0 00:16:07.185 } 00:16:07.185 ] 00:16:07.185 }' 00:16:07.185 22:24:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.185 22:24:06 -- common/autotest_common.sh@10 -- # set +x 00:16:07.751 22:24:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.009 [2024-07-14 22:24:06.995654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.009 [2024-07-14 22:24:06.995870] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:08.009 22:24:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:08.267 [2024-07-14 22:24:07.251682] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.267 [2024-07-14 22:24:07.251946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.267 [2024-07-14 22:24:07.252080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.267 [2024-07-14 22:24:07.252144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.267 22:24:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.525 [2024-07-14 22:24:07.481993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.525 BaseBdev1 00:16:08.525 22:24:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:08.525 22:24:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:08.525 22:24:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.525 22:24:07 -- common/autotest_common.sh@889 -- # local i 00:16:08.525 22:24:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.525 22:24:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.525 22:24:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.788 22:24:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.054 [ 00:16:09.054 { 00:16:09.054 "name": "BaseBdev1", 00:16:09.054 "aliases": [ 00:16:09.054 "ae7d5508-37a4-4f90-9765-b4473b997ea9" 00:16:09.054 ], 00:16:09.054 "product_name": "Malloc disk", 00:16:09.054 "block_size": 512, 00:16:09.054 "num_blocks": 65536, 00:16:09.054 "uuid": "ae7d5508-37a4-4f90-9765-b4473b997ea9", 00:16:09.054 "assigned_rate_limits": { 00:16:09.054 "rw_ios_per_sec": 0, 00:16:09.054 "rw_mbytes_per_sec": 0, 00:16:09.054 "r_mbytes_per_sec": 0, 00:16:09.054 "w_mbytes_per_sec": 0 00:16:09.054 }, 00:16:09.054 "claimed": true, 00:16:09.054 "claim_type": "exclusive_write", 00:16:09.054 "zoned": false, 00:16:09.054 "supported_io_types": { 00:16:09.054 "read": true, 00:16:09.054 "write": true, 00:16:09.054 "unmap": true, 00:16:09.054 "write_zeroes": true, 00:16:09.054 "flush": true, 00:16:09.054 "reset": true, 00:16:09.054 "compare": false, 
00:16:09.054 "compare_and_write": false, 00:16:09.054 "abort": true, 00:16:09.054 "nvme_admin": false, 00:16:09.054 "nvme_io": false 00:16:09.054 }, 00:16:09.054 "memory_domains": [ 00:16:09.054 { 00:16:09.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.054 "dma_device_type": 2 00:16:09.054 } 00:16:09.054 ], 00:16:09.054 "driver_specific": {} 00:16:09.054 } 00:16:09.054 ] 00:16:09.054 22:24:07 -- common/autotest_common.sh@895 -- # return 0 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.054 22:24:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.312 22:24:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.312 "name": "Existed_Raid", 00:16:09.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.312 "strip_size_kb": 0, 00:16:09.312 "state": "configuring", 00:16:09.312 "raid_level": "raid1", 00:16:09.312 "superblock": false, 00:16:09.312 "num_base_bdevs": 2, 00:16:09.312 "num_base_bdevs_discovered": 1, 00:16:09.312 "num_base_bdevs_operational": 2, 00:16:09.312 "base_bdevs_list": [ 00:16:09.312 { 00:16:09.312 "name": "BaseBdev1", 00:16:09.312 "uuid": "ae7d5508-37a4-4f90-9765-b4473b997ea9", 00:16:09.312 "is_configured": true, 00:16:09.312 "data_offset": 0, 00:16:09.312 "data_size": 65536 00:16:09.312 }, 00:16:09.312 { 00:16:09.312 "name": "BaseBdev2", 00:16:09.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.312 "is_configured": false, 00:16:09.312 "data_offset": 0, 00:16:09.312 "data_size": 0 00:16:09.312 } 00:16:09.312 ] 00:16:09.312 }' 00:16:09.312 22:24:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.312 22:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:09.879 22:24:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.136 [2024-07-14 22:24:09.026425] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.136 [2024-07-14 22:24:09.026596] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:10.136 22:24:09 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:10.136 22:24:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:10.394 [2024-07-14 22:24:09.290611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.394 [2024-07-14 22:24:09.292525] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.394 [2024-07-14 
22:24:09.292724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.394 22:24:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.652 22:24:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.652 "name": "Existed_Raid", 00:16:10.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.652 "strip_size_kb": 0, 00:16:10.652 "state": "configuring", 00:16:10.652 "raid_level": "raid1", 00:16:10.652 "superblock": false, 00:16:10.652 "num_base_bdevs": 2, 00:16:10.652 "num_base_bdevs_discovered": 1, 00:16:10.652 "num_base_bdevs_operational": 2, 00:16:10.652 "base_bdevs_list": [ 00:16:10.652 { 00:16:10.652 "name": "BaseBdev1", 00:16:10.652 "uuid": "ae7d5508-37a4-4f90-9765-b4473b997ea9", 00:16:10.652 "is_configured": true, 00:16:10.652 "data_offset": 0, 00:16:10.652 "data_size": 65536 00:16:10.652 }, 00:16:10.652 { 00:16:10.652 "name": "BaseBdev2", 00:16:10.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.652 "is_configured": false, 00:16:10.652 "data_offset": 0, 00:16:10.652 "data_size": 0 00:16:10.652 } 00:16:10.652 ] 00:16:10.652 }' 00:16:10.652 22:24:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.652 22:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:11.218 22:24:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.476 [2024-07-14 22:24:10.414817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.476 [2024-07-14 22:24:10.414904] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:11.476 [2024-07-14 22:24:10.414917] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:11.476 [2024-07-14 22:24:10.415042] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:11.476 [2024-07-14 22:24:10.415450] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:11.476 [2024-07-14 22:24:10.415474] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:11.476 BaseBdev2 00:16:11.476 [2024-07-14 22:24:10.415749] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.476 22:24:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:11.476 
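The exchange above can be replayed by hand against the same socket. A minimal sketch, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock, and using an $RPC shorthand of our own for the rpc.py invocation: the raid is registered first and parks in the "configuring" state because neither base bdev exists yet; each bdev_malloc_create then triggers examine, the raid module claims the new bdev, and the raid flips to "online" once the last member is claimed.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Register the raid up front; both members "don't exist now", so state stays configuring.
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# Each malloc bdev is claimed on creation; the second one brings the raid online.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# Expect "state": "online" and "num_base_bdevs_discovered": 2 in the dump.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'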
22:24:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:11.476 22:24:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:11.476 22:24:10 -- common/autotest_common.sh@889 -- # local i 00:16:11.476 22:24:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:11.476 22:24:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:11.476 22:24:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.737 22:24:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.995 [ 00:16:11.995 { 00:16:11.995 "name": "BaseBdev2", 00:16:11.995 "aliases": [ 00:16:11.995 "5b0a10d2-9124-4604-9ffa-3ec014149ee9" 00:16:11.995 ], 00:16:11.995 "product_name": "Malloc disk", 00:16:11.995 "block_size": 512, 00:16:11.995 "num_blocks": 65536, 00:16:11.995 "uuid": "5b0a10d2-9124-4604-9ffa-3ec014149ee9", 00:16:11.995 "assigned_rate_limits": { 00:16:11.995 "rw_ios_per_sec": 0, 00:16:11.995 "rw_mbytes_per_sec": 0, 00:16:11.995 "r_mbytes_per_sec": 0, 00:16:11.995 "w_mbytes_per_sec": 0 00:16:11.995 }, 00:16:11.995 "claimed": true, 00:16:11.995 "claim_type": "exclusive_write", 00:16:11.995 "zoned": false, 00:16:11.995 "supported_io_types": { 00:16:11.995 "read": true, 00:16:11.995 "write": true, 00:16:11.995 "unmap": true, 00:16:11.995 "write_zeroes": true, 00:16:11.995 "flush": true, 00:16:11.995 "reset": true, 00:16:11.995 "compare": false, 00:16:11.995 "compare_and_write": false, 00:16:11.995 "abort": true, 00:16:11.995 "nvme_admin": false, 00:16:11.995 "nvme_io": false 00:16:11.995 }, 00:16:11.995 "memory_domains": [ 00:16:11.995 { 00:16:11.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.995 "dma_device_type": 2 00:16:11.995 } 00:16:11.995 ], 00:16:11.995 "driver_specific": {} 00:16:11.995 } 00:16:11.995 ] 00:16:11.995 22:24:10 -- common/autotest_common.sh@895 -- # return 0 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.995 22:24:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.253 22:24:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.253 "name": "Existed_Raid", 00:16:12.253 "uuid": "c3fa4541-cb32-414b-89ac-f873b0defde3", 00:16:12.253 "strip_size_kb": 0, 00:16:12.253 "state": "online", 00:16:12.253 "raid_level": "raid1", 00:16:12.253 "superblock": false, 00:16:12.253 "num_base_bdevs": 2, 00:16:12.253 
"num_base_bdevs_discovered": 2, 00:16:12.253 "num_base_bdevs_operational": 2, 00:16:12.253 "base_bdevs_list": [ 00:16:12.253 { 00:16:12.253 "name": "BaseBdev1", 00:16:12.253 "uuid": "ae7d5508-37a4-4f90-9765-b4473b997ea9", 00:16:12.253 "is_configured": true, 00:16:12.253 "data_offset": 0, 00:16:12.253 "data_size": 65536 00:16:12.253 }, 00:16:12.253 { 00:16:12.253 "name": "BaseBdev2", 00:16:12.253 "uuid": "5b0a10d2-9124-4604-9ffa-3ec014149ee9", 00:16:12.253 "is_configured": true, 00:16:12.253 "data_offset": 0, 00:16:12.253 "data_size": 65536 00:16:12.253 } 00:16:12.253 ] 00:16:12.253 }' 00:16:12.253 22:24:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.253 22:24:11 -- common/autotest_common.sh@10 -- # set +x 00:16:12.819 22:24:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.077 [2024-07-14 22:24:12.023281] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.077 22:24:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.336 22:24:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.336 "name": "Existed_Raid", 00:16:13.336 "uuid": "c3fa4541-cb32-414b-89ac-f873b0defde3", 00:16:13.336 "strip_size_kb": 0, 00:16:13.336 "state": "online", 00:16:13.336 "raid_level": "raid1", 00:16:13.336 "superblock": false, 00:16:13.336 "num_base_bdevs": 2, 00:16:13.336 "num_base_bdevs_discovered": 1, 00:16:13.336 "num_base_bdevs_operational": 1, 00:16:13.336 "base_bdevs_list": [ 00:16:13.336 { 00:16:13.336 "name": null, 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "is_configured": false, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 65536 00:16:13.336 }, 00:16:13.336 { 00:16:13.336 "name": "BaseBdev2", 00:16:13.336 "uuid": "5b0a10d2-9124-4604-9ffa-3ec014149ee9", 00:16:13.336 "is_configured": true, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 65536 00:16:13.336 } 00:16:13.336 ] 00:16:13.336 }' 00:16:13.336 22:24:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.336 22:24:12 -- common/autotest_common.sh@10 -- # set +x 00:16:13.902 22:24:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:13.902 22:24:12 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:16:13.903 22:24:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.903 22:24:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:14.467 22:24:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:14.467 22:24:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.467 22:24:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:14.467 [2024-07-14 22:24:13.416207] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.467 [2024-07-14 22:24:13.416242] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.467 [2024-07-14 22:24:13.416310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.467 [2024-07-14 22:24:13.482477] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.467 [2024-07-14 22:24:13.482510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:14.467 22:24:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:14.468 22:24:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.468 22:24:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.468 22:24:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.725 22:24:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:14.725 22:24:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:14.725 22:24:13 -- bdev/bdev_raid.sh@287 -- # killprocess 116619 00:16:14.725 22:24:13 -- common/autotest_common.sh@926 -- # '[' -z 116619 ']' 00:16:14.725 22:24:13 -- common/autotest_common.sh@930 -- # kill -0 116619 00:16:14.725 22:24:13 -- common/autotest_common.sh@931 -- # uname 00:16:14.725 22:24:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:14.725 22:24:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116619 00:16:14.725 killing process with pid 116619 00:16:14.725 22:24:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:14.725 22:24:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:14.725 22:24:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116619' 00:16:14.725 22:24:13 -- common/autotest_common.sh@945 -- # kill 116619 00:16:14.725 22:24:13 -- common/autotest_common.sh@950 -- # wait 116619 00:16:14.725 [2024-07-14 22:24:13.777427] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.725 [2024-07-14 22:24:13.777569] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.099 ************************************ 00:16:16.099 END TEST raid_state_function_test 00:16:16.099 ************************************ 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:16.099 00:16:16.099 real 0m10.103s 00:16:16.099 user 0m17.529s 00:16:16.099 sys 0m1.297s 00:16:16.099 22:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.099 22:24:14 -- common/autotest_common.sh@10 -- # set +x 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:16.099 22:24:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:16.099 22:24:14 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.099 22:24:14 -- common/autotest_common.sh@10 -- # set +x 00:16:16.099 ************************************ 00:16:16.099 START TEST raid_state_function_test_sb 00:16:16.099 ************************************ 00:16:16.099 22:24:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=116960 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116960' 00:16:16.099 Process raid pid: 116960 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116960 /var/tmp/spdk-raid.sock 00:16:16.099 22:24:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:16.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.099 22:24:14 -- common/autotest_common.sh@819 -- # '[' -z 116960 ']' 00:16:16.099 22:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.099 22:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.099 22:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:16.099 22:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.099 22:24:14 -- common/autotest_common.sh@10 -- # set +x 00:16:16.099 [2024-07-14 22:24:14.940985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
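Before any RPCs can land, the harness launches the bdev_svc app with raid debug logging enabled and blocks in waitforlisten until the UNIX socket answers. The loop below is a simplified sketch of that startup dance, not the real helper (which also tracks the pid and enforces a timeout), and it assumes the rpc_get_methods RPC as the liveness probe:

sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
# Keep probing until the app accepts RPCs on the socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
	sleep 0.1
done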
00:16:16.099 [2024-07-14 22:24:14.941200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.099 [2024-07-14 22:24:15.109800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.358 [2024-07-14 22:24:15.345158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.616 [2024-07-14 22:24:15.536501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.874 22:24:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.874 22:24:15 -- common/autotest_common.sh@852 -- # return 0 00:16:16.874 22:24:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:17.132 [2024-07-14 22:24:15.986402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.132 [2024-07-14 22:24:15.986498] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.132 [2024-07-14 22:24:15.986523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.132 [2024-07-14 22:24:15.986543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.132 22:24:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.390 22:24:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.390 "name": "Existed_Raid", 00:16:17.390 "uuid": "086daba7-4bea-47c0-afde-c42b1bb49bcb", 00:16:17.390 "strip_size_kb": 0, 00:16:17.390 "state": "configuring", 00:16:17.390 "raid_level": "raid1", 00:16:17.390 "superblock": true, 00:16:17.390 "num_base_bdevs": 2, 00:16:17.390 "num_base_bdevs_discovered": 0, 00:16:17.390 "num_base_bdevs_operational": 2, 00:16:17.390 "base_bdevs_list": [ 00:16:17.390 { 00:16:17.390 "name": "BaseBdev1", 00:16:17.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.390 "is_configured": false, 00:16:17.390 "data_offset": 0, 00:16:17.390 "data_size": 0 00:16:17.390 }, 00:16:17.390 { 00:16:17.390 "name": "BaseBdev2", 00:16:17.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.390 "is_configured": false, 00:16:17.390 "data_offset": 0, 00:16:17.390 "data_size": 0 00:16:17.390 } 00:16:17.390 ] 00:16:17.390 }' 00:16:17.390 22:24:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.390 22:24:16 -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.955 22:24:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:17.955 [2024-07-14 22:24:17.002496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.955 [2024-07-14 22:24:17.002545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:17.955 22:24:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:18.214 [2024-07-14 22:24:17.250621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.214 [2024-07-14 22:24:17.250712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.214 [2024-07-14 22:24:17.250740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.214 [2024-07-14 22:24:17.250765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.214 22:24:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.472 [2024-07-14 22:24:17.471827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.472 BaseBdev1 00:16:18.472 22:24:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:18.472 22:24:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:18.472 22:24:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:18.472 22:24:17 -- common/autotest_common.sh@889 -- # local i 00:16:18.472 22:24:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:18.472 22:24:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:18.472 22:24:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.730 22:24:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.989 [ 00:16:18.989 { 00:16:18.989 "name": "BaseBdev1", 00:16:18.989 "aliases": [ 00:16:18.989 "6f79b0c0-1ace-4b28-9f09-106713301d15" 00:16:18.989 ], 00:16:18.989 "product_name": "Malloc disk", 00:16:18.989 "block_size": 512, 00:16:18.989 "num_blocks": 65536, 00:16:18.989 "uuid": "6f79b0c0-1ace-4b28-9f09-106713301d15", 00:16:18.989 "assigned_rate_limits": { 00:16:18.989 "rw_ios_per_sec": 0, 00:16:18.989 "rw_mbytes_per_sec": 0, 00:16:18.989 "r_mbytes_per_sec": 0, 00:16:18.989 "w_mbytes_per_sec": 0 00:16:18.989 }, 00:16:18.989 "claimed": true, 00:16:18.989 "claim_type": "exclusive_write", 00:16:18.989 "zoned": false, 00:16:18.989 "supported_io_types": { 00:16:18.989 "read": true, 00:16:18.989 "write": true, 00:16:18.989 "unmap": true, 00:16:18.989 "write_zeroes": true, 00:16:18.989 "flush": true, 00:16:18.989 "reset": true, 00:16:18.989 "compare": false, 00:16:18.989 "compare_and_write": false, 00:16:18.989 "abort": true, 00:16:18.989 "nvme_admin": false, 00:16:18.989 "nvme_io": false 00:16:18.989 }, 00:16:18.989 "memory_domains": [ 00:16:18.989 { 00:16:18.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.989 "dma_device_type": 2 00:16:18.989 } 00:16:18.989 ], 00:16:18.989 "driver_specific": {} 00:16:18.989 } 00:16:18.989 ] 00:16:18.989 22:24:17 -- 
common/autotest_common.sh@895 -- # return 0 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.989 22:24:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.248 22:24:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.248 "name": "Existed_Raid", 00:16:19.248 "uuid": "202b8933-4f82-4f68-b729-be278dba7390", 00:16:19.248 "strip_size_kb": 0, 00:16:19.248 "state": "configuring", 00:16:19.248 "raid_level": "raid1", 00:16:19.248 "superblock": true, 00:16:19.248 "num_base_bdevs": 2, 00:16:19.248 "num_base_bdevs_discovered": 1, 00:16:19.248 "num_base_bdevs_operational": 2, 00:16:19.248 "base_bdevs_list": [ 00:16:19.248 { 00:16:19.248 "name": "BaseBdev1", 00:16:19.248 "uuid": "6f79b0c0-1ace-4b28-9f09-106713301d15", 00:16:19.248 "is_configured": true, 00:16:19.248 "data_offset": 2048, 00:16:19.248 "data_size": 63488 00:16:19.248 }, 00:16:19.248 { 00:16:19.248 "name": "BaseBdev2", 00:16:19.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.248 "is_configured": false, 00:16:19.248 "data_offset": 0, 00:16:19.248 "data_size": 0 00:16:19.248 } 00:16:19.248 ] 00:16:19.248 }' 00:16:19.248 22:24:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.248 22:24:18 -- common/autotest_common.sh@10 -- # set +x 00:16:19.814 22:24:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.074 [2024-07-14 22:24:19.016266] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.074 [2024-07-14 22:24:19.016344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:20.074 22:24:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:20.074 22:24:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:20.332 22:24:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.591 BaseBdev1 00:16:20.591 22:24:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:20.591 22:24:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:20.591 22:24:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:20.591 22:24:19 -- common/autotest_common.sh@889 -- # local i 00:16:20.591 22:24:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:20.591 22:24:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:20.591 22:24:19 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.849 22:24:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.117 [ 00:16:21.118 { 00:16:21.118 "name": "BaseBdev1", 00:16:21.118 "aliases": [ 00:16:21.118 "ceb10a6e-4461-4903-99a0-e1b6040027ff" 00:16:21.118 ], 00:16:21.118 "product_name": "Malloc disk", 00:16:21.118 "block_size": 512, 00:16:21.118 "num_blocks": 65536, 00:16:21.118 "uuid": "ceb10a6e-4461-4903-99a0-e1b6040027ff", 00:16:21.118 "assigned_rate_limits": { 00:16:21.118 "rw_ios_per_sec": 0, 00:16:21.118 "rw_mbytes_per_sec": 0, 00:16:21.118 "r_mbytes_per_sec": 0, 00:16:21.118 "w_mbytes_per_sec": 0 00:16:21.118 }, 00:16:21.118 "claimed": false, 00:16:21.118 "zoned": false, 00:16:21.118 "supported_io_types": { 00:16:21.118 "read": true, 00:16:21.118 "write": true, 00:16:21.118 "unmap": true, 00:16:21.118 "write_zeroes": true, 00:16:21.118 "flush": true, 00:16:21.118 "reset": true, 00:16:21.118 "compare": false, 00:16:21.118 "compare_and_write": false, 00:16:21.118 "abort": true, 00:16:21.118 "nvme_admin": false, 00:16:21.118 "nvme_io": false 00:16:21.118 }, 00:16:21.118 "memory_domains": [ 00:16:21.118 { 00:16:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.118 "dma_device_type": 2 00:16:21.118 } 00:16:21.118 ], 00:16:21.118 "driver_specific": {} 00:16:21.118 } 00:16:21.118 ] 00:16:21.118 22:24:19 -- common/autotest_common.sh@895 -- # return 0 00:16:21.118 22:24:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:21.118 [2024-07-14 22:24:20.156437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.118 [2024-07-14 22:24:20.158577] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.118 [2024-07-14 22:24:20.158647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.422 "name": "Existed_Raid", 00:16:21.422 "uuid": "26084f3e-2997-4af3-acc4-198b1f345443", 00:16:21.422 "strip_size_kb": 0, 00:16:21.422 "state": "configuring", 
00:16:21.422 "raid_level": "raid1", 00:16:21.422 "superblock": true, 00:16:21.422 "num_base_bdevs": 2, 00:16:21.422 "num_base_bdevs_discovered": 1, 00:16:21.422 "num_base_bdevs_operational": 2, 00:16:21.422 "base_bdevs_list": [ 00:16:21.422 { 00:16:21.422 "name": "BaseBdev1", 00:16:21.422 "uuid": "ceb10a6e-4461-4903-99a0-e1b6040027ff", 00:16:21.422 "is_configured": true, 00:16:21.422 "data_offset": 2048, 00:16:21.422 "data_size": 63488 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "name": "BaseBdev2", 00:16:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.422 "is_configured": false, 00:16:21.422 "data_offset": 0, 00:16:21.422 "data_size": 0 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 }' 00:16:21.422 22:24:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.422 22:24:20 -- common/autotest_common.sh@10 -- # set +x 00:16:22.355 22:24:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.355 [2024-07-14 22:24:21.406346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.355 [2024-07-14 22:24:21.406646] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:22.355 [2024-07-14 22:24:21.406679] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.355 [2024-07-14 22:24:21.406792] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:22.356 BaseBdev2 00:16:22.356 [2024-07-14 22:24:21.407178] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:22.356 [2024-07-14 22:24:21.407199] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:22.356 [2024-07-14 22:24:21.407349] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.614 22:24:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:22.614 22:24:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:22.614 22:24:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.614 22:24:21 -- common/autotest_common.sh@889 -- # local i 00:16:22.614 22:24:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.614 22:24:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.614 22:24:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.614 22:24:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.873 [ 00:16:22.873 { 00:16:22.873 "name": "BaseBdev2", 00:16:22.873 "aliases": [ 00:16:22.873 "bb037e8f-889f-4b1b-a8d2-4928e88aed40" 00:16:22.873 ], 00:16:22.873 "product_name": "Malloc disk", 00:16:22.873 "block_size": 512, 00:16:22.873 "num_blocks": 65536, 00:16:22.873 "uuid": "bb037e8f-889f-4b1b-a8d2-4928e88aed40", 00:16:22.873 "assigned_rate_limits": { 00:16:22.873 "rw_ios_per_sec": 0, 00:16:22.873 "rw_mbytes_per_sec": 0, 00:16:22.873 "r_mbytes_per_sec": 0, 00:16:22.873 "w_mbytes_per_sec": 0 00:16:22.873 }, 00:16:22.873 "claimed": true, 00:16:22.873 "claim_type": "exclusive_write", 00:16:22.873 "zoned": false, 00:16:22.873 "supported_io_types": { 00:16:22.873 "read": true, 00:16:22.873 "write": true, 00:16:22.873 "unmap": true, 00:16:22.873 "write_zeroes": true, 00:16:22.873 "flush": true, 00:16:22.873 "reset": true, 
00:16:22.873 "compare": false, 00:16:22.873 "compare_and_write": false, 00:16:22.873 "abort": true, 00:16:22.873 "nvme_admin": false, 00:16:22.873 "nvme_io": false 00:16:22.873 }, 00:16:22.873 "memory_domains": [ 00:16:22.873 { 00:16:22.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.873 "dma_device_type": 2 00:16:22.873 } 00:16:22.873 ], 00:16:22.873 "driver_specific": {} 00:16:22.873 } 00:16:22.873 ] 00:16:22.873 22:24:21 -- common/autotest_common.sh@895 -- # return 0 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.873 22:24:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.132 22:24:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.132 "name": "Existed_Raid", 00:16:23.132 "uuid": "26084f3e-2997-4af3-acc4-198b1f345443", 00:16:23.132 "strip_size_kb": 0, 00:16:23.132 "state": "online", 00:16:23.132 "raid_level": "raid1", 00:16:23.132 "superblock": true, 00:16:23.132 "num_base_bdevs": 2, 00:16:23.132 "num_base_bdevs_discovered": 2, 00:16:23.132 "num_base_bdevs_operational": 2, 00:16:23.132 "base_bdevs_list": [ 00:16:23.132 { 00:16:23.132 "name": "BaseBdev1", 00:16:23.132 "uuid": "ceb10a6e-4461-4903-99a0-e1b6040027ff", 00:16:23.132 "is_configured": true, 00:16:23.132 "data_offset": 2048, 00:16:23.132 "data_size": 63488 00:16:23.132 }, 00:16:23.132 { 00:16:23.132 "name": "BaseBdev2", 00:16:23.132 "uuid": "bb037e8f-889f-4b1b-a8d2-4928e88aed40", 00:16:23.132 "is_configured": true, 00:16:23.132 "data_offset": 2048, 00:16:23.132 "data_size": 63488 00:16:23.132 } 00:16:23.132 ] 00:16:23.132 }' 00:16:23.132 22:24:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.132 22:24:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 22:24:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:23.957 [2024-07-14 22:24:22.930831] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.957 22:24:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:23.957 22:24:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:23.957 22:24:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.958 
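Because raid1 carries redundancy (has_redundancy returns 0 for it), deleting one base bdev is expected to leave the array serviceable: the verification that follows asserts state "online" with a single operational member, and the vacated slot is reported with a null name and the all-zero uuid. Run by hand, the check reduces to two calls already visible in the trace, again under the $RPC shorthand:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_delete BaseBdev1
# raid1 survives the loss: expect "state": "online" and "num_base_bdevs_discovered": 1.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'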
22:24:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.958 22:24:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.216 22:24:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.216 "name": "Existed_Raid", 00:16:24.216 "uuid": "26084f3e-2997-4af3-acc4-198b1f345443", 00:16:24.216 "strip_size_kb": 0, 00:16:24.216 "state": "online", 00:16:24.216 "raid_level": "raid1", 00:16:24.216 "superblock": true, 00:16:24.216 "num_base_bdevs": 2, 00:16:24.216 "num_base_bdevs_discovered": 1, 00:16:24.216 "num_base_bdevs_operational": 1, 00:16:24.216 "base_bdevs_list": [ 00:16:24.216 { 00:16:24.216 "name": null, 00:16:24.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.216 "is_configured": false, 00:16:24.216 "data_offset": 2048, 00:16:24.216 "data_size": 63488 00:16:24.216 }, 00:16:24.216 { 00:16:24.216 "name": "BaseBdev2", 00:16:24.216 "uuid": "bb037e8f-889f-4b1b-a8d2-4928e88aed40", 00:16:24.216 "is_configured": true, 00:16:24.216 "data_offset": 2048, 00:16:24.216 "data_size": 63488 00:16:24.216 } 00:16:24.216 ] 00:16:24.216 }' 00:16:24.216 22:24:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.216 22:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.148 22:24:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:25.149 22:24:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.149 22:24:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.149 22:24:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:25.149 22:24:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:25.149 22:24:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.149 22:24:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:25.407 [2024-07-14 22:24:24.369834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.407 [2024-07-14 22:24:24.369872] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.407 [2024-07-14 22:24:24.369939] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.407 [2024-07-14 22:24:24.438643] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.407 [2024-07-14 22:24:24.438681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:25.407 22:24:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.407 22:24:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.407 22:24:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:25.407 22:24:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.665 22:24:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:25.665 22:24:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:25.665 22:24:24 -- bdev/bdev_raid.sh@287 -- # killprocess 116960 00:16:25.665 22:24:24 -- common/autotest_common.sh@926 -- # '[' -z 116960 ']' 00:16:25.665 22:24:24 -- common/autotest_common.sh@930 -- # kill -0 116960 00:16:25.665 22:24:24 -- common/autotest_common.sh@931 -- # uname 00:16:25.665 22:24:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.665 22:24:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116960 00:16:25.665 22:24:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:25.666 killing process with pid 116960 00:16:25.666 22:24:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:25.666 22:24:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116960' 00:16:25.666 22:24:24 -- common/autotest_common.sh@945 -- # kill 116960 00:16:25.666 22:24:24 -- common/autotest_common.sh@950 -- # wait 116960 00:16:25.666 [2024-07-14 22:24:24.725835] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.666 [2024-07-14 22:24:24.725949] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.042 ************************************ 00:16:27.042 END TEST raid_state_function_test_sb 00:16:27.042 ************************************ 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:27.042 00:16:27.042 real 0m10.903s 00:16:27.042 user 0m18.946s 00:16:27.042 sys 0m1.334s 00:16:27.042 22:24:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.042 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:27.042 22:24:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:27.042 22:24:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.042 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 ************************************ 00:16:27.042 START TEST raid_superblock_test 00:16:27.042 ************************************ 00:16:27.042 22:24:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=117318 00:16:27.042 22:24:25 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 117318 /var/tmp/spdk-raid.sock 00:16:27.042 22:24:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:27.042 22:24:25 -- common/autotest_common.sh@819 -- # '[' -z 117318 ']' 00:16:27.042 22:24:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:27.042 22:24:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.042 22:24:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:27.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:27.042 22:24:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.042 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 [2024-07-14 22:24:25.898877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:27.042 [2024-07-14 22:24:25.899085] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117318 ] 00:16:27.042 [2024-07-14 22:24:26.068593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.301 [2024-07-14 22:24:26.279311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.560 [2024-07-14 22:24:26.472269] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.819 22:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.819 22:24:26 -- common/autotest_common.sh@852 -- # return 0 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.819 22:24:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:28.078 malloc1 00:16:28.078 22:24:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.336 [2024-07-14 22:24:27.253357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.336 [2024-07-14 22:24:27.253517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.336 [2024-07-14 22:24:27.253559] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:28.336 [2024-07-14 22:24:27.253611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.336 [2024-07-14 22:24:27.256140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.336 [2024-07-14 22:24:27.256201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.336 pt1 00:16:28.336 
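For the superblock test the mallocs are wrapped in passthru bdevs with fixed uuids, so the same base bdevs can be torn down and re-examined later, and the raid is created with -s to persist an on-disk superblock. With 512-byte blocks the superblock reserves the first 2048 blocks of each 65536-block member, which is why the dumps that follow report "data_offset": 2048 and "data_size": 63488 (65536 − 2048). A sketch of the stacking, with the uuids taken from the trace and the $RPC shorthand as before:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# -s writes a raid superblock to each base bdev before bringing the array online.
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s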
22:24:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.336 22:24:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:28.594 malloc2 00:16:28.594 22:24:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.852 [2024-07-14 22:24:27.700635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.852 [2024-07-14 22:24:27.700708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.852 [2024-07-14 22:24:27.700751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:28.852 [2024-07-14 22:24:27.700805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.852 [2024-07-14 22:24:27.703148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.852 [2024-07-14 22:24:27.703193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.852 pt2 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:28.852 [2024-07-14 22:24:27.888722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.852 [2024-07-14 22:24:27.890832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.852 [2024-07-14 22:24:27.891023] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:28.852 [2024-07-14 22:24:27.891038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:28.852 [2024-07-14 22:24:27.891174] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:28.852 [2024-07-14 22:24:27.891567] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:28.852 [2024-07-14 22:24:27.891587] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:28.852 [2024-07-14 22:24:27.891729] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.852 22:24:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.110 22:24:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.110 "name": "raid_bdev1", 00:16:29.110 "uuid": "b23d858e-fe79-4ac6-92ba-79b906297cd9", 00:16:29.110 "strip_size_kb": 0, 00:16:29.110 "state": "online", 00:16:29.110 "raid_level": "raid1", 00:16:29.110 "superblock": true, 00:16:29.110 "num_base_bdevs": 2, 00:16:29.110 "num_base_bdevs_discovered": 2, 00:16:29.110 "num_base_bdevs_operational": 2, 00:16:29.110 "base_bdevs_list": [ 00:16:29.110 { 00:16:29.110 "name": "pt1", 00:16:29.110 "uuid": "3e4eea7f-f0a6-531e-9614-2011f321c822", 00:16:29.110 "is_configured": true, 00:16:29.110 "data_offset": 2048, 00:16:29.110 "data_size": 63488 00:16:29.110 }, 00:16:29.110 { 00:16:29.110 "name": "pt2", 00:16:29.110 "uuid": "bafa4fb1-115f-5233-b39c-c181ce30d1cb", 00:16:29.110 "is_configured": true, 00:16:29.110 "data_offset": 2048, 00:16:29.110 "data_size": 63488 00:16:29.110 } 00:16:29.110 ] 00:16:29.110 }' 00:16:29.110 22:24:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.110 22:24:28 -- common/autotest_common.sh@10 -- # set +x 00:16:30.043 22:24:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.043 22:24:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:30.043 [2024-07-14 22:24:28.957155] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.043 22:24:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b23d858e-fe79-4ac6-92ba-79b906297cd9 00:16:30.043 22:24:28 -- bdev/bdev_raid.sh@380 -- # '[' -z b23d858e-fe79-4ac6-92ba-79b906297cd9 ']' 00:16:30.043 22:24:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.301 [2024-07-14 22:24:29.192921] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.301 [2024-07-14 22:24:29.192946] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.301 [2024-07-14 22:24:29.193042] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.301 [2024-07-14 22:24:29.193152] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.301 [2024-07-14 22:24:29.193165] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:30.301 22:24:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.301 22:24:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.560 22:24:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:30.819 22:24:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:30.819 22:24:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.077 22:24:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:31.077 22:24:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:31.077 22:24:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:31.077 22:24:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:31.077 22:24:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.077 22:24:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.077 22:24:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.077 22:24:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.077 22:24:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.077 22:24:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.077 22:24:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.077 22:24:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:31.077 22:24:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:31.337 [2024-07-14 22:24:30.169228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.337 [2024-07-14 22:24:30.171111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.337 [2024-07-14 22:24:30.171201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:31.337 [2024-07-14 22:24:30.171292] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:31.337 [2024-07-14 22:24:30.171328] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.337 [2024-07-14 22:24:30.171354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:31.337 request: 00:16:31.337 { 00:16:31.337 "name": "raid_bdev1", 00:16:31.337 "raid_level": "raid1", 00:16:31.337 "base_bdevs": [ 00:16:31.337 "malloc1", 00:16:31.337 "malloc2" 00:16:31.337 ], 00:16:31.337 "superblock": false, 00:16:31.337 "method": "bdev_raid_create", 00:16:31.337 "req_id": 1 00:16:31.337 } 00:16:31.337 Got JSON-RPC error response 00:16:31.337 response: 00:16:31.337 { 00:16:31.337 "code": -17, 00:16:31.337 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.337 } 00:16:31.337 22:24:30 -- common/autotest_common.sh@643 -- # es=1 00:16:31.337 22:24:30 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:16:31.337 22:24:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:31.337 22:24:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:31.337 22:24:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.337 22:24:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:31.337 22:24:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:31.337 22:24:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:31.337 22:24:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.596 [2024-07-14 22:24:30.545260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.596 [2024-07-14 22:24:30.545403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.596 [2024-07-14 22:24:30.545463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:31.596 [2024-07-14 22:24:30.545491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.596 [2024-07-14 22:24:30.548024] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.596 [2024-07-14 22:24:30.548093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.596 [2024-07-14 22:24:30.548199] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:31.596 [2024-07-14 22:24:30.548290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.596 pt1 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.597 22:24:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.855 22:24:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.856 "name": "raid_bdev1", 00:16:31.856 "uuid": "b23d858e-fe79-4ac6-92ba-79b906297cd9", 00:16:31.856 "strip_size_kb": 0, 00:16:31.856 "state": "configuring", 00:16:31.856 "raid_level": "raid1", 00:16:31.856 "superblock": true, 00:16:31.856 "num_base_bdevs": 2, 00:16:31.856 "num_base_bdevs_discovered": 1, 00:16:31.856 "num_base_bdevs_operational": 2, 00:16:31.856 "base_bdevs_list": [ 00:16:31.856 { 00:16:31.856 "name": "pt1", 00:16:31.856 "uuid": "3e4eea7f-f0a6-531e-9614-2011f321c822", 00:16:31.856 "is_configured": true, 00:16:31.856 "data_offset": 2048, 00:16:31.856 "data_size": 63488 00:16:31.856 }, 00:16:31.856 { 00:16:31.856 "name": null, 00:16:31.856 "uuid": "bafa4fb1-115f-5233-b39c-c181ce30d1cb", 00:16:31.856 
"is_configured": false, 00:16:31.856 "data_offset": 2048, 00:16:31.856 "data_size": 63488 00:16:31.856 } 00:16:31.856 ] 00:16:31.856 }' 00:16:31.856 22:24:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.856 22:24:30 -- common/autotest_common.sh@10 -- # set +x 00:16:32.422 22:24:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:32.422 22:24:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:32.422 22:24:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:32.422 22:24:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.681 [2024-07-14 22:24:31.625545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.681 [2024-07-14 22:24:31.625684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.681 [2024-07-14 22:24:31.625727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:32.681 [2024-07-14 22:24:31.625754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.681 [2024-07-14 22:24:31.626327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.681 [2024-07-14 22:24:31.626371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.681 [2024-07-14 22:24:31.626476] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:32.681 [2024-07-14 22:24:31.626504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.681 [2024-07-14 22:24:31.626663] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:32.681 [2024-07-14 22:24:31.626685] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:32.681 [2024-07-14 22:24:31.626808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:32.681 [2024-07-14 22:24:31.627140] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:32.681 [2024-07-14 22:24:31.627163] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:32.681 [2024-07-14 22:24:31.627314] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.681 pt2 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.681 22:24:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.681 22:24:31 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.940 22:24:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.940 "name": "raid_bdev1", 00:16:32.940 "uuid": "b23d858e-fe79-4ac6-92ba-79b906297cd9", 00:16:32.940 "strip_size_kb": 0, 00:16:32.940 "state": "online", 00:16:32.940 "raid_level": "raid1", 00:16:32.940 "superblock": true, 00:16:32.940 "num_base_bdevs": 2, 00:16:32.940 "num_base_bdevs_discovered": 2, 00:16:32.940 "num_base_bdevs_operational": 2, 00:16:32.940 "base_bdevs_list": [ 00:16:32.940 { 00:16:32.940 "name": "pt1", 00:16:32.940 "uuid": "3e4eea7f-f0a6-531e-9614-2011f321c822", 00:16:32.940 "is_configured": true, 00:16:32.940 "data_offset": 2048, 00:16:32.940 "data_size": 63488 00:16:32.940 }, 00:16:32.940 { 00:16:32.940 "name": "pt2", 00:16:32.940 "uuid": "bafa4fb1-115f-5233-b39c-c181ce30d1cb", 00:16:32.940 "is_configured": true, 00:16:32.940 "data_offset": 2048, 00:16:32.940 "data_size": 63488 00:16:32.940 } 00:16:32.940 ] 00:16:32.940 }' 00:16:32.940 22:24:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.940 22:24:31 -- common/autotest_common.sh@10 -- # set +x 00:16:33.519 22:24:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:33.519 22:24:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:33.804 [2024-07-14 22:24:32.646043] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.804 22:24:32 -- bdev/bdev_raid.sh@430 -- # '[' b23d858e-fe79-4ac6-92ba-79b906297cd9 '!=' b23d858e-fe79-4ac6-92ba-79b906297cd9 ']' 00:16:33.804 22:24:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:33.804 22:24:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:33.804 22:24:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:33.804 22:24:32 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:34.062 [2024-07-14 22:24:32.881889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.062 22:24:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.320 22:24:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.320 "name": "raid_bdev1", 00:16:34.320 "uuid": "b23d858e-fe79-4ac6-92ba-79b906297cd9", 00:16:34.320 "strip_size_kb": 0, 00:16:34.320 "state": "online", 00:16:34.320 "raid_level": "raid1", 00:16:34.320 "superblock": true, 00:16:34.320 "num_base_bdevs": 2, 00:16:34.320 "num_base_bdevs_discovered": 1, 00:16:34.320 "num_base_bdevs_operational": 1, 00:16:34.320 
"base_bdevs_list": [ 00:16:34.320 { 00:16:34.320 "name": null, 00:16:34.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.320 "is_configured": false, 00:16:34.320 "data_offset": 2048, 00:16:34.320 "data_size": 63488 00:16:34.320 }, 00:16:34.320 { 00:16:34.320 "name": "pt2", 00:16:34.320 "uuid": "bafa4fb1-115f-5233-b39c-c181ce30d1cb", 00:16:34.320 "is_configured": true, 00:16:34.320 "data_offset": 2048, 00:16:34.320 "data_size": 63488 00:16:34.320 } 00:16:34.320 ] 00:16:34.320 }' 00:16:34.320 22:24:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.320 22:24:33 -- common/autotest_common.sh@10 -- # set +x 00:16:34.884 22:24:33 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:35.141 [2024-07-14 22:24:33.974107] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.141 [2024-07-14 22:24:33.974143] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.141 [2024-07-14 22:24:33.974228] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.141 [2024-07-14 22:24:33.974291] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.141 [2024-07-14 22:24:33.974305] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:35.141 22:24:33 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.141 22:24:33 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:35.141 22:24:34 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:35.141 22:24:34 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:35.141 22:24:34 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:35.141 22:24:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:35.141 22:24:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:35.399 22:24:34 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.658 [2024-07-14 22:24:34.606167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.658 [2024-07-14 22:24:34.606251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.658 [2024-07-14 22:24:34.606287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:35.658 [2024-07-14 22:24:34.606325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.658 [2024-07-14 22:24:34.608491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.658 [2024-07-14 22:24:34.608542] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.658 [2024-07-14 22:24:34.608641] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:35.658 [2024-07-14 22:24:34.608701] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.658 [2024-07-14 22:24:34.608810] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:35.658 [2024-07-14 22:24:34.608824] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.658 [2024-07-14 22:24:34.608920] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:35.658 [2024-07-14 22:24:34.609245] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:35.658 [2024-07-14 22:24:34.609269] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:35.658 [2024-07-14 22:24:34.609392] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.658 pt2 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.658 22:24:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.917 22:24:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.917 "name": "raid_bdev1", 00:16:35.917 "uuid": "b23d858e-fe79-4ac6-92ba-79b906297cd9", 00:16:35.917 "strip_size_kb": 0, 00:16:35.917 "state": "online", 00:16:35.917 "raid_level": "raid1", 00:16:35.917 "superblock": true, 00:16:35.917 "num_base_bdevs": 2, 00:16:35.917 "num_base_bdevs_discovered": 1, 00:16:35.917 "num_base_bdevs_operational": 1, 00:16:35.917 "base_bdevs_list": [ 00:16:35.917 { 00:16:35.917 "name": null, 00:16:35.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.917 "is_configured": false, 00:16:35.917 "data_offset": 2048, 00:16:35.917 "data_size": 63488 00:16:35.917 }, 00:16:35.917 { 00:16:35.917 "name": "pt2", 00:16:35.917 "uuid": "bafa4fb1-115f-5233-b39c-c181ce30d1cb", 00:16:35.917 "is_configured": true, 00:16:35.917 "data_offset": 2048, 00:16:35.917 "data_size": 63488 00:16:35.917 } 00:16:35.917 ] 00:16:35.917 }' 00:16:35.917 22:24:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.917 22:24:34 -- common/autotest_common.sh@10 -- # set +x 00:16:36.484 22:24:35 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:36.484 22:24:35 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:36.484 22:24:35 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:36.742 [2024-07-14 22:24:35.642545] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.742 22:24:35 -- bdev/bdev_raid.sh@506 -- # '[' b23d858e-fe79-4ac6-92ba-79b906297cd9 '!=' b23d858e-fe79-4ac6-92ba-79b906297cd9 ']' 00:16:36.742 22:24:35 -- 
bdev/bdev_raid.sh@511 -- # killprocess 117318 00:16:36.742 22:24:35 -- common/autotest_common.sh@926 -- # '[' -z 117318 ']' 00:16:36.742 22:24:35 -- common/autotest_common.sh@930 -- # kill -0 117318 00:16:36.742 22:24:35 -- common/autotest_common.sh@931 -- # uname 00:16:36.742 22:24:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.742 22:24:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117318 00:16:36.742 killing process with pid 117318 00:16:36.742 22:24:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:36.742 22:24:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:36.742 22:24:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117318' 00:16:36.742 22:24:35 -- common/autotest_common.sh@945 -- # kill 117318 00:16:36.742 22:24:35 -- common/autotest_common.sh@950 -- # wait 117318 00:16:36.742 [2024-07-14 22:24:35.678425] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.742 [2024-07-14 22:24:35.678493] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.742 [2024-07-14 22:24:35.678578] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.742 [2024-07-14 22:24:35.678590] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:37.001 [2024-07-14 22:24:35.804634] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.936 ************************************ 00:16:37.936 END TEST raid_superblock_test 00:16:37.936 ************************************ 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:37.936 00:16:37.936 real 0m10.907s 00:16:37.936 user 0m19.489s 00:16:37.936 sys 0m1.295s 00:16:37.936 22:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.936 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:37.936 22:24:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:37.936 22:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.936 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 ************************************ 00:16:37.936 START TEST raid_state_function_test 00:16:37.936 ************************************ 00:16:37.936 22:24:36 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 
00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=117683 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117683' 00:16:37.936 Process raid pid: 117683 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117683 /var/tmp/spdk-raid.sock 00:16:37.936 22:24:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:37.936 22:24:36 -- common/autotest_common.sh@819 -- # '[' -z 117683 ']' 00:16:37.936 22:24:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.936 22:24:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.936 22:24:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.936 22:24:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.936 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 [2024-07-14 22:24:36.871142] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
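The trace above shows how each raid_state_function_test run is scaffolded: the base_bdevs array is built by a command substitution over a for loop, a dedicated bdev_svc app is launched with its own JSON-RPC socket (-r /var/tmp/spdk-raid.sock), waitforlisten blocks until that socket answers, and killprocess (kill -0 liveness check, then kill and wait, as traced for pid 117318 earlier) tears the app down afterwards. A minimal sketch of that lifecycle, assuming the SPDK checkout paths shown in the log and a simplified stand-in for waitforlisten:

    #!/bin/bash
    # Sketch of the start/RPC/teardown cycle used by these tests.
    rootdir=/home/vagrant/spdk_repo/spdk          # assumed checkout path
    rpc_sock=/var/tmp/spdk-raid.sock

    # Start a bare bdev_svc app listening on a private RPC socket.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!

    # Block until the UNIX socket accepts RPCs (simplified waitforlisten).
    until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

    # Drive the test through JSON-RPC calls against that socket.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_malloc_create 32 512 -b BaseBdev1

    # Teardown (simplified killprocess): check liveness, then kill and reap.
    if kill -0 "$raid_pid" 2>/dev/null; then
        kill "$raid_pid"
        wait "$raid_pid"
    fi

The private socket is what lets this test run alongside other SPDK apps on the same host: every rpc.py call in the trace carries the same -s /var/tmp/spdk-raid.sock argument.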
00:16:37.936 [2024-07-14 22:24:36.871345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.194 [2024-07-14 22:24:37.035874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.194 [2024-07-14 22:24:37.192962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.452 [2024-07-14 22:24:37.358436] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.018 22:24:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.018 22:24:37 -- common/autotest_common.sh@852 -- # return 0 00:16:39.018 22:24:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.018 [2024-07-14 22:24:38.044516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.018 [2024-07-14 22:24:38.044613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.018 [2024-07-14 22:24:38.044628] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.018 [2024-07-14 22:24:38.044648] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.018 [2024-07-14 22:24:38.044656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.018 [2024-07-14 22:24:38.044699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.018 22:24:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.276 22:24:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.276 "name": "Existed_Raid", 00:16:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.276 "strip_size_kb": 64, 00:16:39.276 "state": "configuring", 00:16:39.276 "raid_level": "raid0", 00:16:39.276 "superblock": false, 00:16:39.276 "num_base_bdevs": 3, 00:16:39.276 "num_base_bdevs_discovered": 0, 00:16:39.276 "num_base_bdevs_operational": 3, 00:16:39.276 "base_bdevs_list": [ 00:16:39.276 { 00:16:39.276 "name": "BaseBdev1", 00:16:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.276 "is_configured": false, 00:16:39.277 "data_offset": 0, 00:16:39.277 "data_size": 0 00:16:39.277 }, 00:16:39.277 { 00:16:39.277 "name": "BaseBdev2", 00:16:39.277 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:39.277 "is_configured": false, 00:16:39.277 "data_offset": 0, 00:16:39.277 "data_size": 0 00:16:39.277 }, 00:16:39.277 { 00:16:39.277 "name": "BaseBdev3", 00:16:39.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.277 "is_configured": false, 00:16:39.277 "data_offset": 0, 00:16:39.277 "data_size": 0 00:16:39.277 } 00:16:39.277 ] 00:16:39.277 }' 00:16:39.277 22:24:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.277 22:24:38 -- common/autotest_common.sh@10 -- # set +x 00:16:40.209 22:24:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.209 [2024-07-14 22:24:39.096612] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.209 [2024-07-14 22:24:39.096679] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:40.209 22:24:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.467 [2024-07-14 22:24:39.340665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.467 [2024-07-14 22:24:39.340747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.467 [2024-07-14 22:24:39.340769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.467 [2024-07-14 22:24:39.340789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.467 [2024-07-14 22:24:39.340797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.467 [2024-07-14 22:24:39.340835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.467 22:24:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.724 [2024-07-14 22:24:39.567003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.724 BaseBdev1 00:16:40.724 22:24:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:40.724 22:24:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:40.724 22:24:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:40.724 22:24:39 -- common/autotest_common.sh@889 -- # local i 00:16:40.724 22:24:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:40.724 22:24:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:40.724 22:24:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.724 22:24:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.981 [ 00:16:40.981 { 00:16:40.981 "name": "BaseBdev1", 00:16:40.981 "aliases": [ 00:16:40.981 "47ece943-270c-4df7-98c1-0bc963e66f80" 00:16:40.981 ], 00:16:40.981 "product_name": "Malloc disk", 00:16:40.981 "block_size": 512, 00:16:40.981 "num_blocks": 65536, 00:16:40.981 "uuid": "47ece943-270c-4df7-98c1-0bc963e66f80", 00:16:40.981 "assigned_rate_limits": { 00:16:40.981 "rw_ios_per_sec": 0, 00:16:40.981 "rw_mbytes_per_sec": 0, 00:16:40.981 "r_mbytes_per_sec": 0, 00:16:40.981 "w_mbytes_per_sec": 0 
00:16:40.981 }, 00:16:40.981 "claimed": true, 00:16:40.981 "claim_type": "exclusive_write", 00:16:40.981 "zoned": false, 00:16:40.981 "supported_io_types": { 00:16:40.981 "read": true, 00:16:40.981 "write": true, 00:16:40.981 "unmap": true, 00:16:40.981 "write_zeroes": true, 00:16:40.981 "flush": true, 00:16:40.981 "reset": true, 00:16:40.981 "compare": false, 00:16:40.981 "compare_and_write": false, 00:16:40.981 "abort": true, 00:16:40.981 "nvme_admin": false, 00:16:40.981 "nvme_io": false 00:16:40.981 }, 00:16:40.981 "memory_domains": [ 00:16:40.981 { 00:16:40.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.981 "dma_device_type": 2 00:16:40.981 } 00:16:40.981 ], 00:16:40.981 "driver_specific": {} 00:16:40.981 } 00:16:40.981 ] 00:16:40.981 22:24:39 -- common/autotest_common.sh@895 -- # return 0 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.981 22:24:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.238 22:24:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.238 "name": "Existed_Raid", 00:16:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.238 "strip_size_kb": 64, 00:16:41.238 "state": "configuring", 00:16:41.238 "raid_level": "raid0", 00:16:41.238 "superblock": false, 00:16:41.238 "num_base_bdevs": 3, 00:16:41.238 "num_base_bdevs_discovered": 1, 00:16:41.238 "num_base_bdevs_operational": 3, 00:16:41.238 "base_bdevs_list": [ 00:16:41.238 { 00:16:41.238 "name": "BaseBdev1", 00:16:41.238 "uuid": "47ece943-270c-4df7-98c1-0bc963e66f80", 00:16:41.238 "is_configured": true, 00:16:41.238 "data_offset": 0, 00:16:41.238 "data_size": 65536 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "name": "BaseBdev2", 00:16:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.238 "is_configured": false, 00:16:41.238 "data_offset": 0, 00:16:41.238 "data_size": 0 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "name": "BaseBdev3", 00:16:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.238 "is_configured": false, 00:16:41.238 "data_offset": 0, 00:16:41.238 "data_size": 0 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }' 00:16:41.238 22:24:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.238 22:24:40 -- common/autotest_common.sh@10 -- # set +x 00:16:42.169 22:24:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.169 [2024-07-14 22:24:41.131354] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.169 [2024-07-14 22:24:41.131436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:42.169 22:24:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:42.169 22:24:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:42.427 [2024-07-14 22:24:41.323405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.427 [2024-07-14 22:24:41.325417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.427 [2024-07-14 22:24:41.325531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.427 [2024-07-14 22:24:41.325561] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.427 [2024-07-14 22:24:41.325588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.427 22:24:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.685 22:24:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.685 "name": "Existed_Raid", 00:16:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.685 "strip_size_kb": 64, 00:16:42.685 "state": "configuring", 00:16:42.685 "raid_level": "raid0", 00:16:42.685 "superblock": false, 00:16:42.685 "num_base_bdevs": 3, 00:16:42.685 "num_base_bdevs_discovered": 1, 00:16:42.685 "num_base_bdevs_operational": 3, 00:16:42.685 "base_bdevs_list": [ 00:16:42.685 { 00:16:42.685 "name": "BaseBdev1", 00:16:42.685 "uuid": "47ece943-270c-4df7-98c1-0bc963e66f80", 00:16:42.685 "is_configured": true, 00:16:42.685 "data_offset": 0, 00:16:42.685 "data_size": 65536 00:16:42.685 }, 00:16:42.685 { 00:16:42.685 "name": "BaseBdev2", 00:16:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.685 "is_configured": false, 00:16:42.685 "data_offset": 0, 00:16:42.685 "data_size": 0 00:16:42.685 }, 00:16:42.685 { 00:16:42.685 "name": "BaseBdev3", 00:16:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.685 "is_configured": false, 00:16:42.685 "data_offset": 0, 00:16:42.685 "data_size": 0 00:16:42.685 } 00:16:42.685 ] 00:16:42.685 }' 00:16:42.685 22:24:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.685 22:24:41 -- common/autotest_common.sh@10 -- # set +x 00:16:43.251 22:24:42 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.509 [2024-07-14 22:24:42.538944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.509 BaseBdev2 00:16:43.509 22:24:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:43.509 22:24:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:43.509 22:24:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:43.509 22:24:42 -- common/autotest_common.sh@889 -- # local i 00:16:43.509 22:24:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:43.509 22:24:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:43.509 22:24:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.768 22:24:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.026 [ 00:16:44.026 { 00:16:44.026 "name": "BaseBdev2", 00:16:44.026 "aliases": [ 00:16:44.026 "4638cf59-2c9c-4284-b27d-501e783e55c4" 00:16:44.026 ], 00:16:44.026 "product_name": "Malloc disk", 00:16:44.026 "block_size": 512, 00:16:44.026 "num_blocks": 65536, 00:16:44.026 "uuid": "4638cf59-2c9c-4284-b27d-501e783e55c4", 00:16:44.026 "assigned_rate_limits": { 00:16:44.026 "rw_ios_per_sec": 0, 00:16:44.026 "rw_mbytes_per_sec": 0, 00:16:44.026 "r_mbytes_per_sec": 0, 00:16:44.026 "w_mbytes_per_sec": 0 00:16:44.026 }, 00:16:44.026 "claimed": true, 00:16:44.026 "claim_type": "exclusive_write", 00:16:44.026 "zoned": false, 00:16:44.026 "supported_io_types": { 00:16:44.026 "read": true, 00:16:44.026 "write": true, 00:16:44.026 "unmap": true, 00:16:44.026 "write_zeroes": true, 00:16:44.026 "flush": true, 00:16:44.026 "reset": true, 00:16:44.026 "compare": false, 00:16:44.026 "compare_and_write": false, 00:16:44.026 "abort": true, 00:16:44.026 "nvme_admin": false, 00:16:44.026 "nvme_io": false 00:16:44.026 }, 00:16:44.026 "memory_domains": [ 00:16:44.026 { 00:16:44.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.026 "dma_device_type": 2 00:16:44.026 } 00:16:44.026 ], 00:16:44.026 "driver_specific": {} 00:16:44.026 } 00:16:44.026 ] 00:16:44.026 22:24:43 -- common/autotest_common.sh@895 -- # return 0 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.026 22:24:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
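The verify_raid_bdev_state block that follows repeats throughout this log: the helper dumps every raid bdev with bdev_raid_get_bdevs all, narrows to the bdev under test with a jq select on .name, and compares fields such as state, raid_level, and num_base_bdevs_discovered against the expected values. A sketch of that check, assuming the same rpc.py and jq pipeline as the trace:

    #!/bin/bash
    # Sketch of the verify_raid_bdev_state pattern seen in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    verify_state() {
        local name=$1 expected_state=$2
        local info state
        # Pull all raid bdevs and keep only the one under test.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        state=$(jq -r '.state' <<<"$info")
        if [[ $state != "$expected_state" ]]; then
            echo "FAIL: $name is '$state', expected '$expected_state'" >&2
            return 1
        fi
    }

    verify_state Existed_Raid configuring   # e.g. before all base bdevs exist

This is why the log keeps printing raid_bdev_info JSON blobs after each step: the transitions configuring, online, and offline seen above are exactly the values this check asserts on.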
00:16:44.285 22:24:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.285 "name": "Existed_Raid", 00:16:44.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.285 "strip_size_kb": 64, 00:16:44.285 "state": "configuring", 00:16:44.285 "raid_level": "raid0", 00:16:44.285 "superblock": false, 00:16:44.285 "num_base_bdevs": 3, 00:16:44.285 "num_base_bdevs_discovered": 2, 00:16:44.285 "num_base_bdevs_operational": 3, 00:16:44.285 "base_bdevs_list": [ 00:16:44.285 { 00:16:44.285 "name": "BaseBdev1", 00:16:44.285 "uuid": "47ece943-270c-4df7-98c1-0bc963e66f80", 00:16:44.285 "is_configured": true, 00:16:44.285 "data_offset": 0, 00:16:44.285 "data_size": 65536 00:16:44.285 }, 00:16:44.285 { 00:16:44.285 "name": "BaseBdev2", 00:16:44.285 "uuid": "4638cf59-2c9c-4284-b27d-501e783e55c4", 00:16:44.285 "is_configured": true, 00:16:44.285 "data_offset": 0, 00:16:44.285 "data_size": 65536 00:16:44.285 }, 00:16:44.285 { 00:16:44.285 "name": "BaseBdev3", 00:16:44.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.285 "is_configured": false, 00:16:44.285 "data_offset": 0, 00:16:44.285 "data_size": 0 00:16:44.285 } 00:16:44.285 ] 00:16:44.285 }' 00:16:44.285 22:24:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.285 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:16:45.220 22:24:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:45.220 [2024-07-14 22:24:44.214719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.220 [2024-07-14 22:24:44.214778] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:45.220 [2024-07-14 22:24:44.214789] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.220 [2024-07-14 22:24:44.214919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:45.220 [2024-07-14 22:24:44.215268] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:45.220 [2024-07-14 22:24:44.215291] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:45.220 [2024-07-14 22:24:44.215527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.220 BaseBdev3 00:16:45.220 22:24:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:45.220 22:24:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:45.220 22:24:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:45.220 22:24:44 -- common/autotest_common.sh@889 -- # local i 00:16:45.220 22:24:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:45.220 22:24:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:45.220 22:24:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.478 22:24:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.747 [ 00:16:45.747 { 00:16:45.747 "name": "BaseBdev3", 00:16:45.747 "aliases": [ 00:16:45.747 "dea8a33e-2dd3-4c22-a965-a5113d790c39" 00:16:45.747 ], 00:16:45.747 "product_name": "Malloc disk", 00:16:45.747 "block_size": 512, 00:16:45.747 "num_blocks": 65536, 00:16:45.747 "uuid": "dea8a33e-2dd3-4c22-a965-a5113d790c39", 00:16:45.747 "assigned_rate_limits": { 00:16:45.747 
"rw_ios_per_sec": 0, 00:16:45.747 "rw_mbytes_per_sec": 0, 00:16:45.747 "r_mbytes_per_sec": 0, 00:16:45.747 "w_mbytes_per_sec": 0 00:16:45.747 }, 00:16:45.747 "claimed": true, 00:16:45.747 "claim_type": "exclusive_write", 00:16:45.747 "zoned": false, 00:16:45.747 "supported_io_types": { 00:16:45.747 "read": true, 00:16:45.747 "write": true, 00:16:45.747 "unmap": true, 00:16:45.747 "write_zeroes": true, 00:16:45.747 "flush": true, 00:16:45.747 "reset": true, 00:16:45.747 "compare": false, 00:16:45.747 "compare_and_write": false, 00:16:45.747 "abort": true, 00:16:45.747 "nvme_admin": false, 00:16:45.747 "nvme_io": false 00:16:45.747 }, 00:16:45.747 "memory_domains": [ 00:16:45.747 { 00:16:45.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.747 "dma_device_type": 2 00:16:45.747 } 00:16:45.747 ], 00:16:45.747 "driver_specific": {} 00:16:45.747 } 00:16:45.747 ] 00:16:45.747 22:24:44 -- common/autotest_common.sh@895 -- # return 0 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.747 22:24:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.748 22:24:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.019 22:24:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.019 "name": "Existed_Raid", 00:16:46.019 "uuid": "009425a7-b598-48c2-b20e-51902e11b660", 00:16:46.019 "strip_size_kb": 64, 00:16:46.019 "state": "online", 00:16:46.019 "raid_level": "raid0", 00:16:46.019 "superblock": false, 00:16:46.019 "num_base_bdevs": 3, 00:16:46.019 "num_base_bdevs_discovered": 3, 00:16:46.019 "num_base_bdevs_operational": 3, 00:16:46.019 "base_bdevs_list": [ 00:16:46.019 { 00:16:46.019 "name": "BaseBdev1", 00:16:46.019 "uuid": "47ece943-270c-4df7-98c1-0bc963e66f80", 00:16:46.019 "is_configured": true, 00:16:46.019 "data_offset": 0, 00:16:46.019 "data_size": 65536 00:16:46.019 }, 00:16:46.019 { 00:16:46.019 "name": "BaseBdev2", 00:16:46.019 "uuid": "4638cf59-2c9c-4284-b27d-501e783e55c4", 00:16:46.019 "is_configured": true, 00:16:46.019 "data_offset": 0, 00:16:46.019 "data_size": 65536 00:16:46.019 }, 00:16:46.019 { 00:16:46.019 "name": "BaseBdev3", 00:16:46.019 "uuid": "dea8a33e-2dd3-4c22-a965-a5113d790c39", 00:16:46.019 "is_configured": true, 00:16:46.019 "data_offset": 0, 00:16:46.019 "data_size": 65536 00:16:46.019 } 00:16:46.019 ] 00:16:46.019 }' 00:16:46.020 22:24:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.020 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.585 22:24:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:46.842 [2024-07-14 22:24:45.687050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.842 [2024-07-14 22:24:45.687080] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.842 [2024-07-14 22:24:45.687147] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.842 22:24:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.843 22:24:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.100 22:24:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.100 "name": "Existed_Raid", 00:16:47.100 "uuid": "009425a7-b598-48c2-b20e-51902e11b660", 00:16:47.100 "strip_size_kb": 64, 00:16:47.100 "state": "offline", 00:16:47.100 "raid_level": "raid0", 00:16:47.100 "superblock": false, 00:16:47.100 "num_base_bdevs": 3, 00:16:47.100 "num_base_bdevs_discovered": 2, 00:16:47.100 "num_base_bdevs_operational": 2, 00:16:47.100 "base_bdevs_list": [ 00:16:47.100 { 00:16:47.100 "name": null, 00:16:47.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.100 "is_configured": false, 00:16:47.100 "data_offset": 0, 00:16:47.100 "data_size": 65536 00:16:47.100 }, 00:16:47.100 { 00:16:47.100 "name": "BaseBdev2", 00:16:47.100 "uuid": "4638cf59-2c9c-4284-b27d-501e783e55c4", 00:16:47.100 "is_configured": true, 00:16:47.100 "data_offset": 0, 00:16:47.100 "data_size": 65536 00:16:47.100 }, 00:16:47.100 { 00:16:47.100 "name": "BaseBdev3", 00:16:47.100 "uuid": "dea8a33e-2dd3-4c22-a965-a5113d790c39", 00:16:47.100 "is_configured": true, 00:16:47.100 "data_offset": 0, 00:16:47.100 "data_size": 65536 00:16:47.100 } 00:16:47.100 ] 00:16:47.100 }' 00:16:47.100 22:24:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.100 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:16:47.668 22:24:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:47.668 22:24:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.668 22:24:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.668 22:24:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.927 22:24:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.927 22:24:46 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.927 22:24:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:48.186 [2024-07-14 22:24:47.146733] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.186 22:24:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.186 22:24:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.186 22:24:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.186 22:24:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.444 22:24:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.444 22:24:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.444 22:24:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:48.703 [2024-07-14 22:24:47.697636] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.703 [2024-07-14 22:24:47.697698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:48.961 22:24:47 -- bdev/bdev_raid.sh@287 -- # killprocess 117683 00:16:48.961 22:24:47 -- common/autotest_common.sh@926 -- # '[' -z 117683 ']' 00:16:48.961 22:24:47 -- common/autotest_common.sh@930 -- # kill -0 117683 00:16:48.961 22:24:47 -- common/autotest_common.sh@931 -- # uname 00:16:48.961 22:24:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.961 22:24:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117683 00:16:48.961 killing process with pid 117683 00:16:48.961 22:24:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:48.961 22:24:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:48.961 22:24:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117683' 00:16:48.961 22:24:47 -- common/autotest_common.sh@945 -- # kill 117683 00:16:48.961 22:24:47 -- common/autotest_common.sh@950 -- # wait 117683 00:16:48.961 [2024-07-14 22:24:47.990676] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.961 [2024-07-14 22:24:47.990807] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.897 ************************************ 00:16:49.897 END TEST raid_state_function_test 00:16:49.897 ************************************ 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:49.897 00:16:49.897 real 0m12.103s 00:16:49.897 user 0m21.624s 00:16:49.897 sys 0m1.389s 00:16:49.897 22:24:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.897 22:24:48 -- common/autotest_common.sh@10 -- # set +x 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:49.897 22:24:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:49.897 22:24:48 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.897 22:24:48 -- common/autotest_common.sh@10 -- # set +x 00:16:49.897 ************************************ 00:16:49.897 START TEST raid_state_function_test_sb 00:16:49.897 ************************************ 00:16:49.897 22:24:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:49.897 22:24:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=118083 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118083' 00:16:50.156 Process raid pid: 118083 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118083 /var/tmp/spdk-raid.sock 00:16:50.156 22:24:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:50.156 22:24:48 -- common/autotest_common.sh@819 -- # '[' -z 118083 ']' 00:16:50.156 22:24:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.156 22:24:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.156 22:24:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.156 22:24:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.156 22:24:48 -- common/autotest_common.sh@10 -- # set +x 00:16:50.156 [2024-07-14 22:24:49.053123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
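At this point the log switches from raid_state_function_test to raid_state_function_test_sb: the same state machine is exercised again with superblock=true, so superblock_create_arg becomes -s and a raid superblock is written to each base bdev. That on-disk superblock is also what the earlier negative test relied on: once malloc1 and malloc2 carry superblocks, a second bdev_raid_create fails with -17 "File exists". A small sketch of the superblock flavor, assuming the rpc.py conventions above:

    #!/bin/bash
    # Sketch: create a raid0 set with an on-disk superblock (-s) and confirm it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
    done

    # Same create call as the trace, plus -s to persist a superblock.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # The flag should be reflected in the dumped config (jq -e sets exit status).
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -e '.[] | select(.name == "Existed_Raid") | .superblock == true'

Compare the raid_bdev_info dumps: the non-superblock run above reports "superblock": false, while the _sb variant that starts here should report "superblock": true for the same Existed_Raid state transitions.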
00:16:50.156 [2024-07-14 22:24:49.053446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.415 [2024-07-14 22:24:49.248266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.415 [2024-07-14 22:24:49.400859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.673 [2024-07-14 22:24:49.566379] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.931 22:24:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.932 22:24:49 -- common/autotest_common.sh@852 -- # return 0 00:16:50.932 22:24:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.190 [2024-07-14 22:24:50.103423] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.190 [2024-07-14 22:24:50.103502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.190 [2024-07-14 22:24:50.103516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.190 [2024-07-14 22:24:50.103538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.190 [2024-07-14 22:24:50.103546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.190 [2024-07-14 22:24:50.103586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.190 22:24:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.449 22:24:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.449 "name": "Existed_Raid", 00:16:51.449 "uuid": "7344b4c0-8702-43b9-8895-fea624e0aec7", 00:16:51.449 "strip_size_kb": 64, 00:16:51.449 "state": "configuring", 00:16:51.449 "raid_level": "raid0", 00:16:51.449 "superblock": true, 00:16:51.449 "num_base_bdevs": 3, 00:16:51.449 "num_base_bdevs_discovered": 0, 00:16:51.449 "num_base_bdevs_operational": 3, 00:16:51.449 "base_bdevs_list": [ 00:16:51.449 { 00:16:51.449 "name": "BaseBdev1", 00:16:51.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.449 "is_configured": false, 00:16:51.449 "data_offset": 0, 00:16:51.449 "data_size": 0 00:16:51.449 }, 00:16:51.449 { 00:16:51.449 "name": "BaseBdev2", 00:16:51.449 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:51.449 "is_configured": false, 00:16:51.449 "data_offset": 0, 00:16:51.449 "data_size": 0 00:16:51.449 }, 00:16:51.449 { 00:16:51.449 "name": "BaseBdev3", 00:16:51.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.449 "is_configured": false, 00:16:51.449 "data_offset": 0, 00:16:51.449 "data_size": 0 00:16:51.449 } 00:16:51.449 ] 00:16:51.449 }' 00:16:51.449 22:24:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.449 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:16:52.017 22:24:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.275 [2024-07-14 22:24:51.211481] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.275 [2024-07-14 22:24:51.211514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:52.275 22:24:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.534 [2024-07-14 22:24:51.387571] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.534 [2024-07-14 22:24:51.387629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.534 [2024-07-14 22:24:51.387659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.534 [2024-07-14 22:24:51.387697] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.534 [2024-07-14 22:24:51.387705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.534 [2024-07-14 22:24:51.387738] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.534 22:24:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.791 [2024-07-14 22:24:51.681081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.791 BaseBdev1 00:16:52.791 22:24:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:52.791 22:24:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:52.791 22:24:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:52.791 22:24:51 -- common/autotest_common.sh@889 -- # local i 00:16:52.791 22:24:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:52.791 22:24:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:52.791 22:24:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.049 22:24:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.049 [ 00:16:53.049 { 00:16:53.049 "name": "BaseBdev1", 00:16:53.049 "aliases": [ 00:16:53.049 "8fc0fe5a-e266-4b1f-80cb-e660ac7274f3" 00:16:53.049 ], 00:16:53.049 "product_name": "Malloc disk", 00:16:53.049 "block_size": 512, 00:16:53.049 "num_blocks": 65536, 00:16:53.049 "uuid": "8fc0fe5a-e266-4b1f-80cb-e660ac7274f3", 00:16:53.049 "assigned_rate_limits": { 00:16:53.049 "rw_ios_per_sec": 0, 00:16:53.049 "rw_mbytes_per_sec": 0, 00:16:53.049 "r_mbytes_per_sec": 0, 00:16:53.049 
"w_mbytes_per_sec": 0 00:16:53.049 }, 00:16:53.049 "claimed": true, 00:16:53.049 "claim_type": "exclusive_write", 00:16:53.049 "zoned": false, 00:16:53.049 "supported_io_types": { 00:16:53.049 "read": true, 00:16:53.049 "write": true, 00:16:53.049 "unmap": true, 00:16:53.049 "write_zeroes": true, 00:16:53.049 "flush": true, 00:16:53.049 "reset": true, 00:16:53.049 "compare": false, 00:16:53.049 "compare_and_write": false, 00:16:53.049 "abort": true, 00:16:53.049 "nvme_admin": false, 00:16:53.049 "nvme_io": false 00:16:53.049 }, 00:16:53.049 "memory_domains": [ 00:16:53.049 { 00:16:53.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.049 "dma_device_type": 2 00:16:53.049 } 00:16:53.049 ], 00:16:53.049 "driver_specific": {} 00:16:53.049 } 00:16:53.049 ] 00:16:53.307 22:24:52 -- common/autotest_common.sh@895 -- # return 0 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.307 "name": "Existed_Raid", 00:16:53.307 "uuid": "9f4de60e-09b9-4e66-8f35-197dee75f581", 00:16:53.307 "strip_size_kb": 64, 00:16:53.307 "state": "configuring", 00:16:53.307 "raid_level": "raid0", 00:16:53.307 "superblock": true, 00:16:53.307 "num_base_bdevs": 3, 00:16:53.307 "num_base_bdevs_discovered": 1, 00:16:53.307 "num_base_bdevs_operational": 3, 00:16:53.307 "base_bdevs_list": [ 00:16:53.307 { 00:16:53.307 "name": "BaseBdev1", 00:16:53.307 "uuid": "8fc0fe5a-e266-4b1f-80cb-e660ac7274f3", 00:16:53.307 "is_configured": true, 00:16:53.307 "data_offset": 2048, 00:16:53.307 "data_size": 63488 00:16:53.307 }, 00:16:53.307 { 00:16:53.307 "name": "BaseBdev2", 00:16:53.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.307 "is_configured": false, 00:16:53.307 "data_offset": 0, 00:16:53.307 "data_size": 0 00:16:53.307 }, 00:16:53.307 { 00:16:53.307 "name": "BaseBdev3", 00:16:53.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.307 "is_configured": false, 00:16:53.307 "data_offset": 0, 00:16:53.307 "data_size": 0 00:16:53.307 } 00:16:53.307 ] 00:16:53.307 }' 00:16:53.307 22:24:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.307 22:24:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.241 22:24:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.241 [2024-07-14 22:24:53.169372] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.241 [2024-07-14 22:24:53.169428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:54.241 22:24:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:54.241 22:24:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:54.499 22:24:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.757 BaseBdev1 00:16:54.757 22:24:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:54.757 22:24:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:54.757 22:24:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:54.757 22:24:53 -- common/autotest_common.sh@889 -- # local i 00:16:54.757 22:24:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:54.757 22:24:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:54.757 22:24:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.015 22:24:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.272 [ 00:16:55.272 { 00:16:55.272 "name": "BaseBdev1", 00:16:55.272 "aliases": [ 00:16:55.272 "bdfa6d28-886a-4e78-b15d-403ff276d6df" 00:16:55.272 ], 00:16:55.272 "product_name": "Malloc disk", 00:16:55.272 "block_size": 512, 00:16:55.272 "num_blocks": 65536, 00:16:55.273 "uuid": "bdfa6d28-886a-4e78-b15d-403ff276d6df", 00:16:55.273 "assigned_rate_limits": { 00:16:55.273 "rw_ios_per_sec": 0, 00:16:55.273 "rw_mbytes_per_sec": 0, 00:16:55.273 "r_mbytes_per_sec": 0, 00:16:55.273 "w_mbytes_per_sec": 0 00:16:55.273 }, 00:16:55.273 "claimed": false, 00:16:55.273 "zoned": false, 00:16:55.273 "supported_io_types": { 00:16:55.273 "read": true, 00:16:55.273 "write": true, 00:16:55.273 "unmap": true, 00:16:55.273 "write_zeroes": true, 00:16:55.273 "flush": true, 00:16:55.273 "reset": true, 00:16:55.273 "compare": false, 00:16:55.273 "compare_and_write": false, 00:16:55.273 "abort": true, 00:16:55.273 "nvme_admin": false, 00:16:55.273 "nvme_io": false 00:16:55.273 }, 00:16:55.273 "memory_domains": [ 00:16:55.273 { 00:16:55.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.273 "dma_device_type": 2 00:16:55.273 } 00:16:55.273 ], 00:16:55.273 "driver_specific": {} 00:16:55.273 } 00:16:55.273 ] 00:16:55.273 22:24:54 -- common/autotest_common.sh@895 -- # return 0 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.273 [2024-07-14 22:24:54.295940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.273 [2024-07-14 22:24:54.298076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.273 [2024-07-14 22:24:54.298149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.273 [2024-07-14 22:24:54.298162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.273 [2024-07-14 22:24:54.298187] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:55.273 
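Every verify_raid_bdev_state call in this trace reduces to one RPC dump filtered through jq, with a handful of fields compared against expectations. A rough shell equivalent of the check that follows, assuming the same socket; the info, state, and discovered variable names are illustrative:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Only BaseBdev1 exists at this point, so the raid must report "configuring"
# with num_base_bdevs_discovered=1 of num_base_bdevs_operational=3.
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[ "$state" = configuring ] && [ "$discovered" -eq 1 ]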
22:24:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.273 22:24:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.530 22:24:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.530 "name": "Existed_Raid", 00:16:55.530 "uuid": "3bcd02d4-906e-4f54-ae9d-773b5e5d4153", 00:16:55.530 "strip_size_kb": 64, 00:16:55.530 "state": "configuring", 00:16:55.530 "raid_level": "raid0", 00:16:55.530 "superblock": true, 00:16:55.530 "num_base_bdevs": 3, 00:16:55.530 "num_base_bdevs_discovered": 1, 00:16:55.530 "num_base_bdevs_operational": 3, 00:16:55.530 "base_bdevs_list": [ 00:16:55.530 { 00:16:55.530 "name": "BaseBdev1", 00:16:55.530 "uuid": "bdfa6d28-886a-4e78-b15d-403ff276d6df", 00:16:55.530 "is_configured": true, 00:16:55.530 "data_offset": 2048, 00:16:55.530 "data_size": 63488 00:16:55.530 }, 00:16:55.530 { 00:16:55.530 "name": "BaseBdev2", 00:16:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.530 "is_configured": false, 00:16:55.530 "data_offset": 0, 00:16:55.530 "data_size": 0 00:16:55.530 }, 00:16:55.530 { 00:16:55.530 "name": "BaseBdev3", 00:16:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.530 "is_configured": false, 00:16:55.530 "data_offset": 0, 00:16:55.530 "data_size": 0 00:16:55.530 } 00:16:55.530 ] 00:16:55.530 }' 00:16:55.530 22:24:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.530 22:24:54 -- common/autotest_common.sh@10 -- # set +x 00:16:56.464 22:24:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.464 [2024-07-14 22:24:55.482087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.464 BaseBdev2 00:16:56.464 22:24:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:56.464 22:24:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:56.464 22:24:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:56.464 22:24:55 -- common/autotest_common.sh@889 -- # local i 00:16:56.464 22:24:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:56.464 22:24:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:56.464 22:24:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.722 22:24:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.980 [ 00:16:56.980 { 00:16:56.980 "name": "BaseBdev2", 00:16:56.980 "aliases": [ 00:16:56.980 
"2dbf16f6-b68e-4378-bc80-696121630ba1" 00:16:56.980 ], 00:16:56.980 "product_name": "Malloc disk", 00:16:56.980 "block_size": 512, 00:16:56.980 "num_blocks": 65536, 00:16:56.980 "uuid": "2dbf16f6-b68e-4378-bc80-696121630ba1", 00:16:56.980 "assigned_rate_limits": { 00:16:56.980 "rw_ios_per_sec": 0, 00:16:56.980 "rw_mbytes_per_sec": 0, 00:16:56.980 "r_mbytes_per_sec": 0, 00:16:56.980 "w_mbytes_per_sec": 0 00:16:56.980 }, 00:16:56.980 "claimed": true, 00:16:56.980 "claim_type": "exclusive_write", 00:16:56.980 "zoned": false, 00:16:56.980 "supported_io_types": { 00:16:56.980 "read": true, 00:16:56.980 "write": true, 00:16:56.980 "unmap": true, 00:16:56.980 "write_zeroes": true, 00:16:56.980 "flush": true, 00:16:56.980 "reset": true, 00:16:56.980 "compare": false, 00:16:56.980 "compare_and_write": false, 00:16:56.980 "abort": true, 00:16:56.980 "nvme_admin": false, 00:16:56.980 "nvme_io": false 00:16:56.980 }, 00:16:56.980 "memory_domains": [ 00:16:56.980 { 00:16:56.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.980 "dma_device_type": 2 00:16:56.980 } 00:16:56.980 ], 00:16:56.980 "driver_specific": {} 00:16:56.980 } 00:16:56.980 ] 00:16:56.980 22:24:55 -- common/autotest_common.sh@895 -- # return 0 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.980 22:24:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.237 22:24:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.237 "name": "Existed_Raid", 00:16:57.237 "uuid": "3bcd02d4-906e-4f54-ae9d-773b5e5d4153", 00:16:57.237 "strip_size_kb": 64, 00:16:57.237 "state": "configuring", 00:16:57.237 "raid_level": "raid0", 00:16:57.237 "superblock": true, 00:16:57.237 "num_base_bdevs": 3, 00:16:57.237 "num_base_bdevs_discovered": 2, 00:16:57.237 "num_base_bdevs_operational": 3, 00:16:57.237 "base_bdevs_list": [ 00:16:57.237 { 00:16:57.237 "name": "BaseBdev1", 00:16:57.237 "uuid": "bdfa6d28-886a-4e78-b15d-403ff276d6df", 00:16:57.237 "is_configured": true, 00:16:57.237 "data_offset": 2048, 00:16:57.237 "data_size": 63488 00:16:57.237 }, 00:16:57.237 { 00:16:57.237 "name": "BaseBdev2", 00:16:57.237 "uuid": "2dbf16f6-b68e-4378-bc80-696121630ba1", 00:16:57.237 "is_configured": true, 00:16:57.237 "data_offset": 2048, 00:16:57.237 "data_size": 63488 00:16:57.237 }, 00:16:57.237 { 00:16:57.237 "name": "BaseBdev3", 00:16:57.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.237 "is_configured": false, 00:16:57.237 "data_offset": 0, 00:16:57.237 "data_size": 0 00:16:57.237 
} 00:16:57.237 ] 00:16:57.237 }' 00:16:57.237 22:24:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.237 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:16:57.815 22:24:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.085 [2024-07-14 22:24:57.081948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.085 BaseBdev3 00:16:58.085 [2024-07-14 22:24:57.082180] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:58.085 [2024-07-14 22:24:57.082196] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.085 [2024-07-14 22:24:57.082340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:58.085 [2024-07-14 22:24:57.082739] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:58.085 [2024-07-14 22:24:57.082755] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:58.085 [2024-07-14 22:24:57.082886] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.085 22:24:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:58.085 22:24:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:58.085 22:24:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:58.085 22:24:57 -- common/autotest_common.sh@889 -- # local i 00:16:58.085 22:24:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:58.085 22:24:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:58.085 22:24:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.343 22:24:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.601 [ 00:16:58.601 { 00:16:58.601 "name": "BaseBdev3", 00:16:58.601 "aliases": [ 00:16:58.601 "66dc1054-09e5-437d-a6cd-e2dfcc4de1fd" 00:16:58.601 ], 00:16:58.601 "product_name": "Malloc disk", 00:16:58.601 "block_size": 512, 00:16:58.601 "num_blocks": 65536, 00:16:58.601 "uuid": "66dc1054-09e5-437d-a6cd-e2dfcc4de1fd", 00:16:58.601 "assigned_rate_limits": { 00:16:58.601 "rw_ios_per_sec": 0, 00:16:58.601 "rw_mbytes_per_sec": 0, 00:16:58.601 "r_mbytes_per_sec": 0, 00:16:58.601 "w_mbytes_per_sec": 0 00:16:58.601 }, 00:16:58.601 "claimed": true, 00:16:58.601 "claim_type": "exclusive_write", 00:16:58.601 "zoned": false, 00:16:58.601 "supported_io_types": { 00:16:58.601 "read": true, 00:16:58.601 "write": true, 00:16:58.601 "unmap": true, 00:16:58.601 "write_zeroes": true, 00:16:58.601 "flush": true, 00:16:58.601 "reset": true, 00:16:58.601 "compare": false, 00:16:58.601 "compare_and_write": false, 00:16:58.601 "abort": true, 00:16:58.601 "nvme_admin": false, 00:16:58.601 "nvme_io": false 00:16:58.601 }, 00:16:58.601 "memory_domains": [ 00:16:58.601 { 00:16:58.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.601 "dma_device_type": 2 00:16:58.601 } 00:16:58.601 ], 00:16:58.601 "driver_specific": {} 00:16:58.601 } 00:16:58.601 ] 00:16:58.601 22:24:57 -- common/autotest_common.sh@895 -- # return 0 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.601 22:24:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.859 22:24:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.859 "name": "Existed_Raid", 00:16:58.859 "uuid": "3bcd02d4-906e-4f54-ae9d-773b5e5d4153", 00:16:58.859 "strip_size_kb": 64, 00:16:58.859 "state": "online", 00:16:58.859 "raid_level": "raid0", 00:16:58.859 "superblock": true, 00:16:58.859 "num_base_bdevs": 3, 00:16:58.859 "num_base_bdevs_discovered": 3, 00:16:58.859 "num_base_bdevs_operational": 3, 00:16:58.859 "base_bdevs_list": [ 00:16:58.859 { 00:16:58.859 "name": "BaseBdev1", 00:16:58.859 "uuid": "bdfa6d28-886a-4e78-b15d-403ff276d6df", 00:16:58.859 "is_configured": true, 00:16:58.859 "data_offset": 2048, 00:16:58.859 "data_size": 63488 00:16:58.859 }, 00:16:58.859 { 00:16:58.859 "name": "BaseBdev2", 00:16:58.859 "uuid": "2dbf16f6-b68e-4378-bc80-696121630ba1", 00:16:58.859 "is_configured": true, 00:16:58.859 "data_offset": 2048, 00:16:58.859 "data_size": 63488 00:16:58.859 }, 00:16:58.859 { 00:16:58.860 "name": "BaseBdev3", 00:16:58.860 "uuid": "66dc1054-09e5-437d-a6cd-e2dfcc4de1fd", 00:16:58.860 "is_configured": true, 00:16:58.860 "data_offset": 2048, 00:16:58.860 "data_size": 63488 00:16:58.860 } 00:16:58.860 ] 00:16:58.860 }' 00:16:58.860 22:24:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.860 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:16:59.426 22:24:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:59.684 [2024-07-14 22:24:58.574292] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.684 [2024-07-14 22:24:58.574320] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.684 [2024-07-14 22:24:58.574376] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.684 22:24:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:59.684 22:24:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:59.684 22:24:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:59.684 22:24:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:59.684 22:24:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.685 22:24:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.943 22:24:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.943 "name": "Existed_Raid", 00:16:59.943 "uuid": "3bcd02d4-906e-4f54-ae9d-773b5e5d4153", 00:16:59.943 "strip_size_kb": 64, 00:16:59.943 "state": "offline", 00:16:59.943 "raid_level": "raid0", 00:16:59.943 "superblock": true, 00:16:59.943 "num_base_bdevs": 3, 00:16:59.943 "num_base_bdevs_discovered": 2, 00:16:59.943 "num_base_bdevs_operational": 2, 00:16:59.943 "base_bdevs_list": [ 00:16:59.943 { 00:16:59.943 "name": null, 00:16:59.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.943 "is_configured": false, 00:16:59.943 "data_offset": 2048, 00:16:59.943 "data_size": 63488 00:16:59.943 }, 00:16:59.943 { 00:16:59.943 "name": "BaseBdev2", 00:16:59.943 "uuid": "2dbf16f6-b68e-4378-bc80-696121630ba1", 00:16:59.943 "is_configured": true, 00:16:59.943 "data_offset": 2048, 00:16:59.943 "data_size": 63488 00:16:59.943 }, 00:16:59.943 { 00:16:59.943 "name": "BaseBdev3", 00:16:59.943 "uuid": "66dc1054-09e5-437d-a6cd-e2dfcc4de1fd", 00:16:59.943 "is_configured": true, 00:16:59.943 "data_offset": 2048, 00:16:59.943 "data_size": 63488 00:16:59.943 } 00:16:59.943 ] 00:16:59.943 }' 00:16:59.943 22:24:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.943 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:17:00.508 22:24:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:00.508 22:24:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.508 22:24:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.508 22:24:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.765 22:24:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.765 22:24:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.765 22:24:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:01.023 [2024-07-14 22:25:00.021063] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.281 22:25:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:01.539 [2024-07-14 22:25:00.516265] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:01.539 [2024-07-14 
22:25:00.516359] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:01.797 22:25:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:01.797 22:25:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:01.797 22:25:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.797 22:25:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:02.055 22:25:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:02.055 22:25:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:02.055 22:25:00 -- bdev/bdev_raid.sh@287 -- # killprocess 118083 00:17:02.055 22:25:00 -- common/autotest_common.sh@926 -- # '[' -z 118083 ']' 00:17:02.055 22:25:00 -- common/autotest_common.sh@930 -- # kill -0 118083 00:17:02.055 22:25:00 -- common/autotest_common.sh@931 -- # uname 00:17:02.055 22:25:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.055 22:25:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118083 00:17:02.055 killing process with pid 118083 00:17:02.055 22:25:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:02.055 22:25:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:02.055 22:25:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118083' 00:17:02.055 22:25:00 -- common/autotest_common.sh@945 -- # kill 118083 00:17:02.055 22:25:00 -- common/autotest_common.sh@950 -- # wait 118083 00:17:02.055 [2024-07-14 22:25:00.882423] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.055 [2024-07-14 22:25:00.882570] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.991 ************************************ 00:17:02.991 END TEST raid_state_function_test_sb 00:17:02.991 ************************************ 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:02.991 00:17:02.991 real 0m12.871s 00:17:02.991 user 0m22.883s 00:17:02.991 sys 0m1.513s 00:17:02.991 22:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.991 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:02.991 22:25:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:02.991 22:25:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:02.991 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:17:02.991 ************************************ 00:17:02.991 START TEST raid_superblock_test 00:17:02.991 ************************************ 00:17:02.991 22:25:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:02.991 22:25:01 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=118494 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118494 /var/tmp/spdk-raid.sock 00:17:02.991 22:25:01 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:02.991 22:25:01 -- common/autotest_common.sh@819 -- # '[' -z 118494 ']' 00:17:02.991 22:25:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.991 22:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:02.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.991 22:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.991 22:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:02.991 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:17:02.991 [2024-07-14 22:25:01.949931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:02.991 [2024-07-14 22:25:01.950115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118494 ] 00:17:03.250 [2024-07-14 22:25:02.119252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.508 [2024-07-14 22:25:02.339972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.508 [2024-07-14 22:25:02.501457] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.075 22:25:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.075 22:25:02 -- common/autotest_common.sh@852 -- # return 0 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.075 22:25:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:04.075 malloc1 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:04.334 [2024-07-14 22:25:03.318056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:04.334 [2024-07-14 22:25:03.318153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.334 
[2024-07-14 22:25:03.318190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:04.334 [2024-07-14 22:25:03.318234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.334 [2024-07-14 22:25:03.320275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.334 [2024-07-14 22:25:03.320336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:04.334 pt1 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.334 22:25:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:04.592 malloc2 00:17:04.592 22:25:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.850 [2024-07-14 22:25:03.782327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.850 [2024-07-14 22:25:03.782429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.850 [2024-07-14 22:25:03.782471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:04.850 [2024-07-14 22:25:03.782520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.850 [2024-07-14 22:25:03.784744] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.850 [2024-07-14 22:25:03.784805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.850 pt2 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.850 22:25:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:05.108 malloc3 00:17:05.108 22:25:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.365 [2024-07-14 22:25:04.224674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.365 [2024-07-14 22:25:04.224749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.365 
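For raid_superblock_test, each base device is a passthru bdev stacked on a malloc bdev and created with a fixed UUID, presumably so the identities recorded in the on-disk superblock stay deterministic across runs. The pt1..pt3 loop the trace walks through, condensed under the same socket assumption:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"
  # Fixed UUIDs 00000000-0000-0000-0000-00000000000<i>, as in the trace.
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
# Assemble the superblock raid over the passthru bdevs (command as in the trace).
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s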
[2024-07-14 22:25:04.224787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:05.365 [2024-07-14 22:25:04.224828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.365 [2024-07-14 22:25:04.227209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.365 [2024-07-14 22:25:04.227257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.365 pt3 00:17:05.365 22:25:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:05.365 22:25:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:05.365 22:25:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:05.623 [2024-07-14 22:25:04.428756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.623 [2024-07-14 22:25:04.430988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.623 [2024-07-14 22:25:04.431082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.623 [2024-07-14 22:25:04.431282] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:05.623 [2024-07-14 22:25:04.431305] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.623 [2024-07-14 22:25:04.431459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:05.623 [2024-07-14 22:25:04.431913] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:05.623 [2024-07-14 22:25:04.431937] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:05.623 [2024-07-14 22:25:04.432073] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.623 22:25:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.880 22:25:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.880 "name": "raid_bdev1", 00:17:05.880 "uuid": "e13045fe-8ba5-487d-9bdd-07a91792e7f3", 00:17:05.880 "strip_size_kb": 64, 00:17:05.880 "state": "online", 00:17:05.880 "raid_level": "raid0", 00:17:05.880 "superblock": true, 00:17:05.880 "num_base_bdevs": 3, 00:17:05.880 "num_base_bdevs_discovered": 3, 00:17:05.880 "num_base_bdevs_operational": 3, 00:17:05.880 "base_bdevs_list": [ 00:17:05.880 { 00:17:05.880 "name": "pt1", 00:17:05.880 "uuid": 
"8c120cb3-6b9c-55f9-8851-b7830da6745e", 00:17:05.880 "is_configured": true, 00:17:05.880 "data_offset": 2048, 00:17:05.880 "data_size": 63488 00:17:05.880 }, 00:17:05.880 { 00:17:05.880 "name": "pt2", 00:17:05.880 "uuid": "c6d49b11-e8ff-5a59-93e0-b0e1a233a29a", 00:17:05.880 "is_configured": true, 00:17:05.880 "data_offset": 2048, 00:17:05.880 "data_size": 63488 00:17:05.880 }, 00:17:05.880 { 00:17:05.880 "name": "pt3", 00:17:05.880 "uuid": "aaee1937-4084-5c3c-88f5-d5ca86e655bf", 00:17:05.880 "is_configured": true, 00:17:05.880 "data_offset": 2048, 00:17:05.880 "data_size": 63488 00:17:05.880 } 00:17:05.880 ] 00:17:05.880 }' 00:17:05.880 22:25:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.880 22:25:04 -- common/autotest_common.sh@10 -- # set +x 00:17:06.445 22:25:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:06.445 22:25:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:06.703 [2024-07-14 22:25:05.621096] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.703 22:25:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e13045fe-8ba5-487d-9bdd-07a91792e7f3 00:17:06.703 22:25:05 -- bdev/bdev_raid.sh@380 -- # '[' -z e13045fe-8ba5-487d-9bdd-07a91792e7f3 ']' 00:17:06.703 22:25:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.959 [2024-07-14 22:25:05.868942] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.959 [2024-07-14 22:25:05.868965] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.959 [2024-07-14 22:25:05.869027] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.959 [2024-07-14 22:25:05.869082] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.959 [2024-07-14 22:25:05.869091] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:06.959 22:25:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.959 22:25:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:07.217 22:25:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:07.217 22:25:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:07.217 22:25:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.217 22:25:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:07.474 22:25:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.475 22:25:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:07.733 22:25:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.733 22:25:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:07.733 22:25:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:07.733 22:25:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.991 22:25:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:07.991 22:25:06 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.991 22:25:06 -- common/autotest_common.sh@640 -- # local es=0 00:17:07.991 22:25:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.991 22:25:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.991 22:25:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.991 22:25:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.991 22:25:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.991 22:25:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.991 22:25:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.991 22:25:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.991 22:25:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:07.991 22:25:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:08.249 [2024-07-14 22:25:07.141149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:08.249 [2024-07-14 22:25:07.142988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:08.249 [2024-07-14 22:25:07.143041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:08.249 [2024-07-14 22:25:07.143086] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:08.249 [2024-07-14 22:25:07.143166] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:08.249 [2024-07-14 22:25:07.143199] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:08.249 [2024-07-14 22:25:07.143275] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.249 [2024-07-14 22:25:07.143287] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:08.249 request: 00:17:08.249 { 00:17:08.249 "name": "raid_bdev1", 00:17:08.249 "raid_level": "raid0", 00:17:08.249 "base_bdevs": [ 00:17:08.249 "malloc1", 00:17:08.249 "malloc2", 00:17:08.249 "malloc3" 00:17:08.249 ], 00:17:08.249 "superblock": false, 00:17:08.249 "strip_size_kb": 64, 00:17:08.249 "method": "bdev_raid_create", 00:17:08.249 "req_id": 1 00:17:08.249 } 00:17:08.249 Got JSON-RPC error response 00:17:08.249 response: 00:17:08.249 { 00:17:08.249 "code": -17, 00:17:08.249 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:08.249 } 00:17:08.249 22:25:07 -- common/autotest_common.sh@643 -- # es=1 00:17:08.249 22:25:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:08.249 22:25:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:08.249 22:25:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:08.249 22:25:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.249 22:25:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.508 [2024-07-14 22:25:07.541161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.508 [2024-07-14 22:25:07.541248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.508 [2024-07-14 22:25:07.541283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:08.508 [2024-07-14 22:25:07.541306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.508 [2024-07-14 22:25:07.543566] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.508 [2024-07-14 22:25:07.543628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.508 [2024-07-14 22:25:07.543755] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:08.508 [2024-07-14 22:25:07.543844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.508 pt1 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.508 22:25:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.766 22:25:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.766 "name": "raid_bdev1", 00:17:08.766 "uuid": "e13045fe-8ba5-487d-9bdd-07a91792e7f3", 00:17:08.766 "strip_size_kb": 64, 00:17:08.766 "state": "configuring", 00:17:08.766 "raid_level": "raid0", 00:17:08.766 "superblock": true, 00:17:08.766 "num_base_bdevs": 3, 00:17:08.766 "num_base_bdevs_discovered": 1, 00:17:08.766 "num_base_bdevs_operational": 3, 00:17:08.766 "base_bdevs_list": [ 00:17:08.766 { 00:17:08.766 "name": "pt1", 00:17:08.766 "uuid": "8c120cb3-6b9c-55f9-8851-b7830da6745e", 00:17:08.766 "is_configured": true, 00:17:08.766 "data_offset": 2048, 00:17:08.766 "data_size": 63488 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "name": null, 00:17:08.766 "uuid": "c6d49b11-e8ff-5a59-93e0-b0e1a233a29a", 00:17:08.766 "is_configured": false, 00:17:08.766 "data_offset": 2048, 00:17:08.766 "data_size": 63488 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "name": null, 00:17:08.766 "uuid": "aaee1937-4084-5c3c-88f5-d5ca86e655bf", 00:17:08.766 "is_configured": false, 00:17:08.766 
"data_offset": 2048, 00:17:08.766 "data_size": 63488 00:17:08.766 } 00:17:08.766 ] 00:17:08.766 }' 00:17:08.766 22:25:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.766 22:25:07 -- common/autotest_common.sh@10 -- # set +x 00:17:09.714 22:25:08 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:09.714 22:25:08 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.714 [2024-07-14 22:25:08.689419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.714 [2024-07-14 22:25:08.689521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.714 [2024-07-14 22:25:08.689566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:09.714 [2024-07-14 22:25:08.689589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.714 [2024-07-14 22:25:08.690017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.714 [2024-07-14 22:25:08.690053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.714 [2024-07-14 22:25:08.690155] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:09.714 [2024-07-14 22:25:08.690179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.714 pt2 00:17:09.714 22:25:08 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:09.986 [2024-07-14 22:25:08.945482] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.986 22:25:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.245 22:25:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.245 "name": "raid_bdev1", 00:17:10.245 "uuid": "e13045fe-8ba5-487d-9bdd-07a91792e7f3", 00:17:10.245 "strip_size_kb": 64, 00:17:10.245 "state": "configuring", 00:17:10.245 "raid_level": "raid0", 00:17:10.245 "superblock": true, 00:17:10.245 "num_base_bdevs": 3, 00:17:10.245 "num_base_bdevs_discovered": 1, 00:17:10.245 "num_base_bdevs_operational": 3, 00:17:10.245 "base_bdevs_list": [ 00:17:10.245 { 00:17:10.245 "name": "pt1", 00:17:10.245 "uuid": "8c120cb3-6b9c-55f9-8851-b7830da6745e", 00:17:10.245 "is_configured": true, 00:17:10.245 "data_offset": 2048, 00:17:10.245 "data_size": 63488 00:17:10.245 }, 00:17:10.245 { 00:17:10.245 "name": null, 00:17:10.245 "uuid": 
"c6d49b11-e8ff-5a59-93e0-b0e1a233a29a", 00:17:10.245 "is_configured": false, 00:17:10.245 "data_offset": 2048, 00:17:10.245 "data_size": 63488 00:17:10.245 }, 00:17:10.245 { 00:17:10.245 "name": null, 00:17:10.245 "uuid": "aaee1937-4084-5c3c-88f5-d5ca86e655bf", 00:17:10.245 "is_configured": false, 00:17:10.245 "data_offset": 2048, 00:17:10.245 "data_size": 63488 00:17:10.245 } 00:17:10.245 ] 00:17:10.245 }' 00:17:10.245 22:25:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.245 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:17:10.812 22:25:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:10.812 22:25:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.812 22:25:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.084 [2024-07-14 22:25:10.049680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.084 [2024-07-14 22:25:10.049740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.084 [2024-07-14 22:25:10.049772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:11.084 [2024-07-14 22:25:10.049836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.085 [2024-07-14 22:25:10.050196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.085 [2024-07-14 22:25:10.050228] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.085 [2024-07-14 22:25:10.050313] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:11.085 [2024-07-14 22:25:10.050343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.085 pt2 00:17:11.085 22:25:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:11.085 22:25:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:11.085 22:25:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.344 [2024-07-14 22:25:10.237719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.344 [2024-07-14 22:25:10.237774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.344 [2024-07-14 22:25:10.237803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:11.344 [2024-07-14 22:25:10.237826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.344 [2024-07-14 22:25:10.238182] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.344 [2024-07-14 22:25:10.238212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.344 [2024-07-14 22:25:10.238298] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:11.344 [2024-07-14 22:25:10.238321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.344 [2024-07-14 22:25:10.238420] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:11.344 [2024-07-14 22:25:10.238432] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.344 [2024-07-14 22:25:10.238520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:17:11.344 [2024-07-14 22:25:10.238833] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:11.344 [2024-07-14 22:25:10.238856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:11.344 [2024-07-14 22:25:10.238976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.344 pt3 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.344 22:25:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.626 22:25:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.626 "name": "raid_bdev1", 00:17:11.626 "uuid": "e13045fe-8ba5-487d-9bdd-07a91792e7f3", 00:17:11.626 "strip_size_kb": 64, 00:17:11.626 "state": "online", 00:17:11.626 "raid_level": "raid0", 00:17:11.626 "superblock": true, 00:17:11.626 "num_base_bdevs": 3, 00:17:11.626 "num_base_bdevs_discovered": 3, 00:17:11.626 "num_base_bdevs_operational": 3, 00:17:11.626 "base_bdevs_list": [ 00:17:11.626 { 00:17:11.626 "name": "pt1", 00:17:11.626 "uuid": "8c120cb3-6b9c-55f9-8851-b7830da6745e", 00:17:11.626 "is_configured": true, 00:17:11.626 "data_offset": 2048, 00:17:11.626 "data_size": 63488 00:17:11.626 }, 00:17:11.626 { 00:17:11.626 "name": "pt2", 00:17:11.626 "uuid": "c6d49b11-e8ff-5a59-93e0-b0e1a233a29a", 00:17:11.626 "is_configured": true, 00:17:11.626 "data_offset": 2048, 00:17:11.626 "data_size": 63488 00:17:11.626 }, 00:17:11.626 { 00:17:11.626 "name": "pt3", 00:17:11.626 "uuid": "aaee1937-4084-5c3c-88f5-d5ca86e655bf", 00:17:11.626 "is_configured": true, 00:17:11.626 "data_offset": 2048, 00:17:11.626 "data_size": 63488 00:17:11.626 } 00:17:11.626 ] 00:17:11.626 }' 00:17:11.626 22:25:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.626 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:17:12.196 22:25:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:12.196 22:25:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.454 [2024-07-14 22:25:11.462190] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.454 22:25:11 -- bdev/bdev_raid.sh@430 -- # '[' e13045fe-8ba5-487d-9bdd-07a91792e7f3 '!=' e13045fe-8ba5-487d-9bdd-07a91792e7f3 ']' 00:17:12.454 22:25:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:12.454 22:25:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:12.454 
22:25:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:12.454 22:25:11 -- bdev/bdev_raid.sh@511 -- # killprocess 118494 00:17:12.454 22:25:11 -- common/autotest_common.sh@926 -- # '[' -z 118494 ']' 00:17:12.454 22:25:11 -- common/autotest_common.sh@930 -- # kill -0 118494 00:17:12.454 22:25:11 -- common/autotest_common.sh@931 -- # uname 00:17:12.454 22:25:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.454 22:25:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118494 00:17:12.454 killing process with pid 118494 00:17:12.454 22:25:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:12.454 22:25:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:12.454 22:25:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118494' 00:17:12.454 22:25:11 -- common/autotest_common.sh@945 -- # kill 118494 00:17:12.454 22:25:11 -- common/autotest_common.sh@950 -- # wait 118494 00:17:12.454 [2024-07-14 22:25:11.504354] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.454 [2024-07-14 22:25:11.504406] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.454 [2024-07-14 22:25:11.504455] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.454 [2024-07-14 22:25:11.504464] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:12.713 [2024-07-14 22:25:11.692174] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.649 ************************************ 00:17:13.649 END TEST raid_superblock_test 00:17:13.649 ************************************ 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:13.649 00:17:13.649 real 0m10.713s 00:17:13.649 user 0m18.926s 00:17:13.649 sys 0m1.204s 00:17:13.649 22:25:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.649 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:13.649 22:25:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:13.649 22:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:13.649 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:17:13.649 ************************************ 00:17:13.649 START TEST raid_state_function_test 00:17:13.649 ************************************ 00:17:13.649 22:25:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=118821 00:17:13.649 Process raid pid: 118821 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118821' 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118821 /var/tmp/spdk-raid.sock 00:17:13.649 22:25:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:13.650 22:25:12 -- common/autotest_common.sh@819 -- # '[' -z 118821 ']' 00:17:13.650 22:25:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.650 22:25:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:13.650 22:25:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.650 22:25:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.650 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:17:13.908 [2024-07-14 22:25:12.725457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
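At this point the harness has forked bdev_svc with -r /var/tmp/spdk-raid.sock and sits in waitforlisten until the new app answers RPC on that socket. A minimal sketch of that wait loop, assuming rpc.py's -t timeout option and the rpc_get_methods call; the function name is illustrative, and the real helper in test/common/autotest_common.sh does more bookkeeping:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=$2
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while kill -0 "$pid" 2>/dev/null; do
        # done as soon as the app answers any RPC on the socket
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1    # the app exited before it started listening
}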
00:17:13.908 [2024-07-14 22:25:12.725734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.908 [2024-07-14 22:25:12.895577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.166 [2024-07-14 22:25:13.050768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.166 [2024-07-14 22:25:13.219922] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.731 22:25:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.731 22:25:13 -- common/autotest_common.sh@852 -- # return 0 00:17:14.731 22:25:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:14.989 [2024-07-14 22:25:13.830779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.989 [2024-07-14 22:25:13.830884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.989 [2024-07-14 22:25:13.830897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.989 [2024-07-14 22:25:13.830920] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.989 [2024-07-14 22:25:13.830927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:14.990 [2024-07-14 22:25:13.830965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.990 22:25:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.990 22:25:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.990 "name": "Existed_Raid", 00:17:14.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.990 "strip_size_kb": 64, 00:17:14.990 "state": "configuring", 00:17:14.990 "raid_level": "concat", 00:17:14.990 "superblock": false, 00:17:14.990 "num_base_bdevs": 3, 00:17:14.990 "num_base_bdevs_discovered": 0, 00:17:14.990 "num_base_bdevs_operational": 3, 00:17:14.990 "base_bdevs_list": [ 00:17:14.990 { 00:17:14.990 "name": "BaseBdev1", 00:17:14.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.990 "is_configured": false, 00:17:14.990 "data_offset": 0, 00:17:14.990 "data_size": 0 00:17:14.990 }, 00:17:14.990 { 00:17:14.990 "name": "BaseBdev2", 00:17:14.990 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:14.990 "is_configured": false, 00:17:14.990 "data_offset": 0, 00:17:14.990 "data_size": 0 00:17:14.990 }, 00:17:14.990 { 00:17:14.990 "name": "BaseBdev3", 00:17:14.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.990 "is_configured": false, 00:17:14.990 "data_offset": 0, 00:17:14.990 "data_size": 0 00:17:14.990 } 00:17:14.990 ] 00:17:14.990 }' 00:17:14.990 22:25:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.990 22:25:14 -- common/autotest_common.sh@10 -- # set +x 00:17:15.923 22:25:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.923 [2024-07-14 22:25:14.794860] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.923 [2024-07-14 22:25:14.794911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:15.923 22:25:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:16.181 [2024-07-14 22:25:15.046931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:16.181 [2024-07-14 22:25:15.046986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:16.181 [2024-07-14 22:25:15.046997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.181 [2024-07-14 22:25:15.047013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.181 [2024-07-14 22:25:15.047019] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:16.181 [2024-07-14 22:25:15.047048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:16.181 22:25:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:16.439 [2024-07-14 22:25:15.248140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.439 BaseBdev1 00:17:16.439 22:25:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:16.439 22:25:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:16.439 22:25:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:16.439 22:25:15 -- common/autotest_common.sh@889 -- # local i 00:17:16.439 22:25:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:16.439 22:25:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:16.439 22:25:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.439 22:25:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:16.697 [ 00:17:16.697 { 00:17:16.697 "name": "BaseBdev1", 00:17:16.697 "aliases": [ 00:17:16.697 "4bf7c9c0-9ef9-4595-8c2e-c6002055395d" 00:17:16.697 ], 00:17:16.697 "product_name": "Malloc disk", 00:17:16.697 "block_size": 512, 00:17:16.697 "num_blocks": 65536, 00:17:16.697 "uuid": "4bf7c9c0-9ef9-4595-8c2e-c6002055395d", 00:17:16.697 "assigned_rate_limits": { 00:17:16.697 "rw_ios_per_sec": 0, 00:17:16.697 "rw_mbytes_per_sec": 0, 00:17:16.697 "r_mbytes_per_sec": 0, 00:17:16.697 "w_mbytes_per_sec": 
0 00:17:16.697 }, 00:17:16.697 "claimed": true, 00:17:16.697 "claim_type": "exclusive_write", 00:17:16.697 "zoned": false, 00:17:16.697 "supported_io_types": { 00:17:16.697 "read": true, 00:17:16.697 "write": true, 00:17:16.697 "unmap": true, 00:17:16.697 "write_zeroes": true, 00:17:16.697 "flush": true, 00:17:16.697 "reset": true, 00:17:16.697 "compare": false, 00:17:16.697 "compare_and_write": false, 00:17:16.697 "abort": true, 00:17:16.697 "nvme_admin": false, 00:17:16.697 "nvme_io": false 00:17:16.697 }, 00:17:16.697 "memory_domains": [ 00:17:16.697 { 00:17:16.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.697 "dma_device_type": 2 00:17:16.697 } 00:17:16.697 ], 00:17:16.697 "driver_specific": {} 00:17:16.697 } 00:17:16.697 ] 00:17:16.697 22:25:15 -- common/autotest_common.sh@895 -- # return 0 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.697 22:25:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.956 22:25:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.956 "name": "Existed_Raid", 00:17:16.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.956 "strip_size_kb": 64, 00:17:16.956 "state": "configuring", 00:17:16.956 "raid_level": "concat", 00:17:16.956 "superblock": false, 00:17:16.956 "num_base_bdevs": 3, 00:17:16.956 "num_base_bdevs_discovered": 1, 00:17:16.956 "num_base_bdevs_operational": 3, 00:17:16.956 "base_bdevs_list": [ 00:17:16.956 { 00:17:16.956 "name": "BaseBdev1", 00:17:16.956 "uuid": "4bf7c9c0-9ef9-4595-8c2e-c6002055395d", 00:17:16.956 "is_configured": true, 00:17:16.956 "data_offset": 0, 00:17:16.956 "data_size": 65536 00:17:16.956 }, 00:17:16.956 { 00:17:16.956 "name": "BaseBdev2", 00:17:16.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.956 "is_configured": false, 00:17:16.956 "data_offset": 0, 00:17:16.956 "data_size": 0 00:17:16.956 }, 00:17:16.956 { 00:17:16.956 "name": "BaseBdev3", 00:17:16.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.956 "is_configured": false, 00:17:16.956 "data_offset": 0, 00:17:16.956 "data_size": 0 00:17:16.956 } 00:17:16.956 ] 00:17:16.956 }' 00:17:16.956 22:25:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.956 22:25:15 -- common/autotest_common.sh@10 -- # set +x 00:17:17.523 22:25:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:17.782 [2024-07-14 22:25:16.664428] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:17.782 [2024-07-14 22:25:16.664465] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:17:17.782 22:25:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:17.782 22:25:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:17.782 [2024-07-14 22:25:16.836501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.782 [2024-07-14 22:25:16.838098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.782 [2024-07-14 22:25:16.838152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.782 [2024-07-14 22:25:16.838163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:17.782 [2024-07-14 22:25:16.838184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.040 22:25:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.040 22:25:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.040 "name": "Existed_Raid", 00:17:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.040 "strip_size_kb": 64, 00:17:18.040 "state": "configuring", 00:17:18.040 "raid_level": "concat", 00:17:18.040 "superblock": false, 00:17:18.040 "num_base_bdevs": 3, 00:17:18.040 "num_base_bdevs_discovered": 1, 00:17:18.040 "num_base_bdevs_operational": 3, 00:17:18.040 "base_bdevs_list": [ 00:17:18.040 { 00:17:18.040 "name": "BaseBdev1", 00:17:18.040 "uuid": "4bf7c9c0-9ef9-4595-8c2e-c6002055395d", 00:17:18.040 "is_configured": true, 00:17:18.040 "data_offset": 0, 00:17:18.040 "data_size": 65536 00:17:18.040 }, 00:17:18.040 { 00:17:18.040 "name": "BaseBdev2", 00:17:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.040 "is_configured": false, 00:17:18.040 "data_offset": 0, 00:17:18.040 "data_size": 0 00:17:18.040 }, 00:17:18.040 { 00:17:18.040 "name": "BaseBdev3", 00:17:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.040 "is_configured": false, 00:17:18.040 "data_offset": 0, 00:17:18.040 "data_size": 0 00:17:18.040 } 00:17:18.040 ] 00:17:18.040 }' 00:17:18.040 22:25:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.040 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:17:18.971 22:25:17 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:18.971 [2024-07-14 22:25:17.888011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.971 BaseBdev2 00:17:18.971 22:25:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:18.971 22:25:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:18.971 22:25:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:18.971 22:25:17 -- common/autotest_common.sh@889 -- # local i 00:17:18.971 22:25:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:18.971 22:25:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:18.972 22:25:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.229 22:25:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:19.229 [ 00:17:19.229 { 00:17:19.229 "name": "BaseBdev2", 00:17:19.229 "aliases": [ 00:17:19.229 "ef73172b-1f4b-4aab-8821-e5c7483e9c38" 00:17:19.229 ], 00:17:19.229 "product_name": "Malloc disk", 00:17:19.229 "block_size": 512, 00:17:19.229 "num_blocks": 65536, 00:17:19.229 "uuid": "ef73172b-1f4b-4aab-8821-e5c7483e9c38", 00:17:19.229 "assigned_rate_limits": { 00:17:19.229 "rw_ios_per_sec": 0, 00:17:19.229 "rw_mbytes_per_sec": 0, 00:17:19.229 "r_mbytes_per_sec": 0, 00:17:19.229 "w_mbytes_per_sec": 0 00:17:19.229 }, 00:17:19.229 "claimed": true, 00:17:19.229 "claim_type": "exclusive_write", 00:17:19.229 "zoned": false, 00:17:19.229 "supported_io_types": { 00:17:19.229 "read": true, 00:17:19.229 "write": true, 00:17:19.229 "unmap": true, 00:17:19.229 "write_zeroes": true, 00:17:19.229 "flush": true, 00:17:19.229 "reset": true, 00:17:19.229 "compare": false, 00:17:19.229 "compare_and_write": false, 00:17:19.229 "abort": true, 00:17:19.229 "nvme_admin": false, 00:17:19.229 "nvme_io": false 00:17:19.229 }, 00:17:19.229 "memory_domains": [ 00:17:19.229 { 00:17:19.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.229 "dma_device_type": 2 00:17:19.229 } 00:17:19.229 ], 00:17:19.229 "driver_specific": {} 00:17:19.229 } 00:17:19.229 ] 00:17:19.229 22:25:18 -- common/autotest_common.sh@895 -- # return 0 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.229 22:25:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
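The jq fetch above is the heart of verify_raid_bdev_state: dump every raid bdev over RPC, select the one under test by name, then compare individual fields against the values the caller passed in. Condensed to its essentials (same rpc.py socket as in the log; the mismatch diagnostics of the real helper are omitted), the check for "Existed_Raid configuring concat 64 3" with two members discovered looks like:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# field-by-field comparison against the expected state
[[ $(jq -r '.state'                     <<< "$info") == configuring ]]
[[ $(jq -r '.raid_level'                <<< "$info") == concat ]]
[[ $(jq -r '.strip_size_kb'             <<< "$info") -eq 64 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 2 ]]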
00:17:19.486 22:25:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.486 "name": "Existed_Raid", 00:17:19.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.487 "strip_size_kb": 64, 00:17:19.487 "state": "configuring", 00:17:19.487 "raid_level": "concat", 00:17:19.487 "superblock": false, 00:17:19.487 "num_base_bdevs": 3, 00:17:19.487 "num_base_bdevs_discovered": 2, 00:17:19.487 "num_base_bdevs_operational": 3, 00:17:19.487 "base_bdevs_list": [ 00:17:19.487 { 00:17:19.487 "name": "BaseBdev1", 00:17:19.487 "uuid": "4bf7c9c0-9ef9-4595-8c2e-c6002055395d", 00:17:19.487 "is_configured": true, 00:17:19.487 "data_offset": 0, 00:17:19.487 "data_size": 65536 00:17:19.487 }, 00:17:19.487 { 00:17:19.487 "name": "BaseBdev2", 00:17:19.487 "uuid": "ef73172b-1f4b-4aab-8821-e5c7483e9c38", 00:17:19.487 "is_configured": true, 00:17:19.487 "data_offset": 0, 00:17:19.487 "data_size": 65536 00:17:19.487 }, 00:17:19.487 { 00:17:19.487 "name": "BaseBdev3", 00:17:19.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.487 "is_configured": false, 00:17:19.487 "data_offset": 0, 00:17:19.487 "data_size": 0 00:17:19.487 } 00:17:19.487 ] 00:17:19.487 }' 00:17:19.487 22:25:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.487 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:17:20.420 22:25:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.420 [2024-07-14 22:25:19.375998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.420 [2024-07-14 22:25:19.376044] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:20.420 [2024-07-14 22:25:19.376053] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:20.420 [2024-07-14 22:25:19.376168] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:20.420 [2024-07-14 22:25:19.376529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:20.420 [2024-07-14 22:25:19.376553] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:20.420 [2024-07-14 22:25:19.376810] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.420 BaseBdev3 00:17:20.420 22:25:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:20.420 22:25:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:20.420 22:25:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:20.420 22:25:19 -- common/autotest_common.sh@889 -- # local i 00:17:20.420 22:25:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:20.420 22:25:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:20.420 22:25:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.678 22:25:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.936 [ 00:17:20.936 { 00:17:20.936 "name": "BaseBdev3", 00:17:20.936 "aliases": [ 00:17:20.936 "dba74d5d-5257-4bab-ad38-4ba7d0e7df25" 00:17:20.936 ], 00:17:20.936 "product_name": "Malloc disk", 00:17:20.936 "block_size": 512, 00:17:20.936 "num_blocks": 65536, 00:17:20.936 "uuid": "dba74d5d-5257-4bab-ad38-4ba7d0e7df25", 00:17:20.936 "assigned_rate_limits": { 00:17:20.936 
"rw_ios_per_sec": 0, 00:17:20.936 "rw_mbytes_per_sec": 0, 00:17:20.936 "r_mbytes_per_sec": 0, 00:17:20.936 "w_mbytes_per_sec": 0 00:17:20.936 }, 00:17:20.936 "claimed": true, 00:17:20.936 "claim_type": "exclusive_write", 00:17:20.936 "zoned": false, 00:17:20.936 "supported_io_types": { 00:17:20.936 "read": true, 00:17:20.936 "write": true, 00:17:20.936 "unmap": true, 00:17:20.936 "write_zeroes": true, 00:17:20.936 "flush": true, 00:17:20.936 "reset": true, 00:17:20.936 "compare": false, 00:17:20.936 "compare_and_write": false, 00:17:20.936 "abort": true, 00:17:20.936 "nvme_admin": false, 00:17:20.936 "nvme_io": false 00:17:20.936 }, 00:17:20.936 "memory_domains": [ 00:17:20.936 { 00:17:20.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.936 "dma_device_type": 2 00:17:20.936 } 00:17:20.936 ], 00:17:20.936 "driver_specific": {} 00:17:20.936 } 00:17:20.936 ] 00:17:20.936 22:25:19 -- common/autotest_common.sh@895 -- # return 0 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.936 22:25:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.193 22:25:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.193 "name": "Existed_Raid", 00:17:21.193 "uuid": "434c2105-6934-43e8-ba51-a9376ccc47c3", 00:17:21.193 "strip_size_kb": 64, 00:17:21.193 "state": "online", 00:17:21.193 "raid_level": "concat", 00:17:21.193 "superblock": false, 00:17:21.193 "num_base_bdevs": 3, 00:17:21.193 "num_base_bdevs_discovered": 3, 00:17:21.193 "num_base_bdevs_operational": 3, 00:17:21.193 "base_bdevs_list": [ 00:17:21.193 { 00:17:21.193 "name": "BaseBdev1", 00:17:21.193 "uuid": "4bf7c9c0-9ef9-4595-8c2e-c6002055395d", 00:17:21.193 "is_configured": true, 00:17:21.193 "data_offset": 0, 00:17:21.193 "data_size": 65536 00:17:21.193 }, 00:17:21.193 { 00:17:21.193 "name": "BaseBdev2", 00:17:21.193 "uuid": "ef73172b-1f4b-4aab-8821-e5c7483e9c38", 00:17:21.193 "is_configured": true, 00:17:21.193 "data_offset": 0, 00:17:21.193 "data_size": 65536 00:17:21.193 }, 00:17:21.193 { 00:17:21.193 "name": "BaseBdev3", 00:17:21.193 "uuid": "dba74d5d-5257-4bab-ad38-4ba7d0e7df25", 00:17:21.193 "is_configured": true, 00:17:21.193 "data_offset": 0, 00:17:21.193 "data_size": 65536 00:17:21.193 } 00:17:21.194 ] 00:17:21.194 }' 00:17:21.194 22:25:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.194 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:17:21.759 22:25:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:21.759 [2024-07-14 22:25:20.760401] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.759 [2024-07-14 22:25:20.760427] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.759 [2024-07-14 22:25:20.760480] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.026 22:25:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.295 22:25:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.295 "name": "Existed_Raid", 00:17:22.295 "uuid": "434c2105-6934-43e8-ba51-a9376ccc47c3", 00:17:22.295 "strip_size_kb": 64, 00:17:22.295 "state": "offline", 00:17:22.295 "raid_level": "concat", 00:17:22.295 "superblock": false, 00:17:22.295 "num_base_bdevs": 3, 00:17:22.295 "num_base_bdevs_discovered": 2, 00:17:22.295 "num_base_bdevs_operational": 2, 00:17:22.295 "base_bdevs_list": [ 00:17:22.295 { 00:17:22.295 "name": null, 00:17:22.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.295 "is_configured": false, 00:17:22.295 "data_offset": 0, 00:17:22.295 "data_size": 65536 00:17:22.295 }, 00:17:22.295 { 00:17:22.295 "name": "BaseBdev2", 00:17:22.295 "uuid": "ef73172b-1f4b-4aab-8821-e5c7483e9c38", 00:17:22.295 "is_configured": true, 00:17:22.295 "data_offset": 0, 00:17:22.295 "data_size": 65536 00:17:22.295 }, 00:17:22.295 { 00:17:22.295 "name": "BaseBdev3", 00:17:22.295 "uuid": "dba74d5d-5257-4bab-ad38-4ba7d0e7df25", 00:17:22.295 "is_configured": true, 00:17:22.295 "data_offset": 0, 00:17:22.295 "data_size": 65536 00:17:22.295 } 00:17:22.295 ] 00:17:22.295 }' 00:17:22.295 22:25:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.295 22:25:21 -- common/autotest_common.sh@10 -- # set +x 00:17:22.861 22:25:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:22.861 22:25:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:22.861 22:25:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.861 22:25:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:23.119 22:25:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:23.119 22:25:21 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.119 22:25:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:23.375 [2024-07-14 22:25:22.191866] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:23.375 22:25:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:23.375 22:25:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.375 22:25:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.375 22:25:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:23.633 22:25:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:23.633 22:25:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.633 22:25:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:23.891 [2024-07-14 22:25:22.763337] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.891 [2024-07-14 22:25:22.763393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:23.891 22:25:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:23.891 22:25:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.891 22:25:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.891 22:25:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:24.149 22:25:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:24.149 22:25:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:24.149 22:25:23 -- bdev/bdev_raid.sh@287 -- # killprocess 118821 00:17:24.149 22:25:23 -- common/autotest_common.sh@926 -- # '[' -z 118821 ']' 00:17:24.149 22:25:23 -- common/autotest_common.sh@930 -- # kill -0 118821 00:17:24.149 22:25:23 -- common/autotest_common.sh@931 -- # uname 00:17:24.149 22:25:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.149 22:25:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118821 00:17:24.149 killing process with pid 118821 00:17:24.149 22:25:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:24.149 22:25:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:24.149 22:25:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118821' 00:17:24.149 22:25:23 -- common/autotest_common.sh@945 -- # kill 118821 00:17:24.149 22:25:23 -- common/autotest_common.sh@950 -- # wait 118821 00:17:24.149 [2024-07-14 22:25:23.045709] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.149 [2024-07-14 22:25:23.045819] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.081 ************************************ 00:17:25.081 END TEST raid_state_function_test 00:17:25.081 ************************************ 00:17:25.081 22:25:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:25.081 00:17:25.081 real 0m11.299s 00:17:25.081 user 0m20.052s 00:17:25.081 sys 0m1.343s 00:17:25.081 22:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.081 22:25:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.081 22:25:23 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:25.081 22:25:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
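The block of deletes above is the hot-remove path of raid_state_function_test: because has_redundancy returns 1 for concat, losing any member must take the whole array from online to offline, which is exactly what the expected_state=offline verification asserted. Reusing the rpc shorthand from the earlier sketch, the shape of that assertion is:

# removing one member of a non-redundant array must take the raid offline
$rpc bdev_malloc_delete BaseBdev2
state=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[[ $state == offline ]]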
00:17:25.081 22:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:25.081 22:25:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.081 ************************************ 00:17:25.081 START TEST raid_state_function_test_sb 00:17:25.081 ************************************ 00:17:25.081 22:25:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=119206 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119206' 00:17:25.081 Process raid pid: 119206 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119206 /var/tmp/spdk-raid.sock 00:17:25.081 22:25:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:25.081 22:25:24 -- common/autotest_common.sh@819 -- # '[' -z 119206 ']' 00:17:25.081 22:25:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:25.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:25.081 22:25:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.081 22:25:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
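The _sb variant starting here repeats the same flow with superblock_create_arg=-s, so each bdev_raid_create below writes an on-disk superblock and every member gives up space for it. The sizes in the surrounding dumps cross-check: bdev_malloc_create 32 512 yields 65536 blocks per member; without -s the 3-member array exposed blockcnt 196608, while the superblock dumps show data_offset 2048 / data_size 63488 per member, i.e. blockcnt 190464 (assuming, as those dumps suggest, the same 32 MiB malloc layout underneath):

echo $(( 32 * 1024 * 1024 / 512 ))   # 65536  blocks in each malloc base bdev
echo $(( 3 * 65536 ))                # 196608 = raid blockcnt without a superblock
echo $(( 3 * (65536 - 2048) ))       # 190464 = blockcnt once -s reserves 2048 blocks per member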
00:17:25.081 22:25:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.081 22:25:24 -- common/autotest_common.sh@10 -- # set +x 00:17:25.081 [2024-07-14 22:25:24.083757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.081 [2024-07-14 22:25:24.083971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.339 [2024-07-14 22:25:24.254884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.598 [2024-07-14 22:25:24.459269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.598 [2024-07-14 22:25:24.623627] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.164 22:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.164 22:25:24 -- common/autotest_common.sh@852 -- # return 0 00:17:26.164 22:25:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:26.164 [2024-07-14 22:25:25.214038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.164 [2024-07-14 22:25:25.214107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.164 [2024-07-14 22:25:25.214136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.164 [2024-07-14 22:25:25.214154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.164 [2024-07-14 22:25:25.214160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.164 [2024-07-14 22:25:25.214199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.423 22:25:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.423 "name": "Existed_Raid", 00:17:26.423 "uuid": "5738791c-d850-45fc-8eaa-e7f0a9c9d913", 00:17:26.424 "strip_size_kb": 64, 00:17:26.424 "state": "configuring", 00:17:26.424 "raid_level": "concat", 00:17:26.424 "superblock": true, 00:17:26.424 "num_base_bdevs": 3, 00:17:26.424 "num_base_bdevs_discovered": 0, 00:17:26.424 "num_base_bdevs_operational": 3, 00:17:26.424 "base_bdevs_list": [ 00:17:26.424 { 00:17:26.424 "name": 
"BaseBdev1", 00:17:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.424 "is_configured": false, 00:17:26.424 "data_offset": 0, 00:17:26.424 "data_size": 0 00:17:26.424 }, 00:17:26.424 { 00:17:26.424 "name": "BaseBdev2", 00:17:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.424 "is_configured": false, 00:17:26.424 "data_offset": 0, 00:17:26.424 "data_size": 0 00:17:26.424 }, 00:17:26.424 { 00:17:26.424 "name": "BaseBdev3", 00:17:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.424 "is_configured": false, 00:17:26.424 "data_offset": 0, 00:17:26.424 "data_size": 0 00:17:26.424 } 00:17:26.424 ] 00:17:26.424 }' 00:17:26.424 22:25:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.424 22:25:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.360 22:25:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:27.360 [2024-07-14 22:25:26.322138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.360 [2024-07-14 22:25:26.322166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:27.360 22:25:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:27.619 [2024-07-14 22:25:26.594231] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.619 [2024-07-14 22:25:26.594290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.619 [2024-07-14 22:25:26.594301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.619 [2024-07-14 22:25:26.594317] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.619 [2024-07-14 22:25:26.594323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:27.619 [2024-07-14 22:25:26.594350] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:27.619 22:25:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.877 [2024-07-14 22:25:26.823513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.877 BaseBdev1 00:17:27.877 22:25:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:27.877 22:25:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:27.877 22:25:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.877 22:25:26 -- common/autotest_common.sh@889 -- # local i 00:17:27.877 22:25:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.877 22:25:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:27.877 22:25:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.136 22:25:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.394 [ 00:17:28.394 { 00:17:28.394 "name": "BaseBdev1", 00:17:28.394 "aliases": [ 00:17:28.394 "5a76ac95-11e3-4594-8042-1f3031d198a3" 00:17:28.394 ], 00:17:28.394 "product_name": "Malloc disk", 00:17:28.394 "block_size": 512, 00:17:28.394 
"num_blocks": 65536, 00:17:28.394 "uuid": "5a76ac95-11e3-4594-8042-1f3031d198a3", 00:17:28.394 "assigned_rate_limits": { 00:17:28.394 "rw_ios_per_sec": 0, 00:17:28.394 "rw_mbytes_per_sec": 0, 00:17:28.394 "r_mbytes_per_sec": 0, 00:17:28.394 "w_mbytes_per_sec": 0 00:17:28.394 }, 00:17:28.394 "claimed": true, 00:17:28.394 "claim_type": "exclusive_write", 00:17:28.394 "zoned": false, 00:17:28.394 "supported_io_types": { 00:17:28.394 "read": true, 00:17:28.394 "write": true, 00:17:28.394 "unmap": true, 00:17:28.394 "write_zeroes": true, 00:17:28.394 "flush": true, 00:17:28.394 "reset": true, 00:17:28.394 "compare": false, 00:17:28.394 "compare_and_write": false, 00:17:28.394 "abort": true, 00:17:28.394 "nvme_admin": false, 00:17:28.394 "nvme_io": false 00:17:28.394 }, 00:17:28.394 "memory_domains": [ 00:17:28.394 { 00:17:28.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.394 "dma_device_type": 2 00:17:28.394 } 00:17:28.394 ], 00:17:28.394 "driver_specific": {} 00:17:28.394 } 00:17:28.394 ] 00:17:28.394 22:25:27 -- common/autotest_common.sh@895 -- # return 0 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.394 22:25:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.653 22:25:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.653 "name": "Existed_Raid", 00:17:28.653 "uuid": "f2c7fb3c-f260-4bbe-9910-8202850c564b", 00:17:28.653 "strip_size_kb": 64, 00:17:28.653 "state": "configuring", 00:17:28.653 "raid_level": "concat", 00:17:28.653 "superblock": true, 00:17:28.653 "num_base_bdevs": 3, 00:17:28.653 "num_base_bdevs_discovered": 1, 00:17:28.653 "num_base_bdevs_operational": 3, 00:17:28.653 "base_bdevs_list": [ 00:17:28.653 { 00:17:28.653 "name": "BaseBdev1", 00:17:28.653 "uuid": "5a76ac95-11e3-4594-8042-1f3031d198a3", 00:17:28.653 "is_configured": true, 00:17:28.653 "data_offset": 2048, 00:17:28.653 "data_size": 63488 00:17:28.653 }, 00:17:28.653 { 00:17:28.653 "name": "BaseBdev2", 00:17:28.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.653 "is_configured": false, 00:17:28.653 "data_offset": 0, 00:17:28.653 "data_size": 0 00:17:28.653 }, 00:17:28.653 { 00:17:28.653 "name": "BaseBdev3", 00:17:28.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.653 "is_configured": false, 00:17:28.653 "data_offset": 0, 00:17:28.653 "data_size": 0 00:17:28.653 } 00:17:28.653 ] 00:17:28.653 }' 00:17:28.653 22:25:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.653 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:17:29.220 22:25:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.478 [2024-07-14 22:25:28.431872] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.478 [2024-07-14 22:25:28.431910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:29.478 22:25:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:29.478 22:25:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.737 22:25:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.996 BaseBdev1 00:17:29.996 22:25:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:29.996 22:25:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:29.996 22:25:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:29.996 22:25:28 -- common/autotest_common.sh@889 -- # local i 00:17:29.996 22:25:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:29.996 22:25:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:29.996 22:25:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.254 22:25:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.513 [ 00:17:30.513 { 00:17:30.513 "name": "BaseBdev1", 00:17:30.513 "aliases": [ 00:17:30.513 "2bb42bc2-ee0f-49ab-a8ce-b25fc62372fe" 00:17:30.513 ], 00:17:30.513 "product_name": "Malloc disk", 00:17:30.513 "block_size": 512, 00:17:30.513 "num_blocks": 65536, 00:17:30.513 "uuid": "2bb42bc2-ee0f-49ab-a8ce-b25fc62372fe", 00:17:30.513 "assigned_rate_limits": { 00:17:30.513 "rw_ios_per_sec": 0, 00:17:30.513 "rw_mbytes_per_sec": 0, 00:17:30.513 "r_mbytes_per_sec": 0, 00:17:30.513 "w_mbytes_per_sec": 0 00:17:30.513 }, 00:17:30.513 "claimed": false, 00:17:30.513 "zoned": false, 00:17:30.513 "supported_io_types": { 00:17:30.513 "read": true, 00:17:30.513 "write": true, 00:17:30.513 "unmap": true, 00:17:30.513 "write_zeroes": true, 00:17:30.513 "flush": true, 00:17:30.513 "reset": true, 00:17:30.513 "compare": false, 00:17:30.513 "compare_and_write": false, 00:17:30.513 "abort": true, 00:17:30.513 "nvme_admin": false, 00:17:30.513 "nvme_io": false 00:17:30.513 }, 00:17:30.513 "memory_domains": [ 00:17:30.513 { 00:17:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.513 "dma_device_type": 2 00:17:30.513 } 00:17:30.513 ], 00:17:30.513 "driver_specific": {} 00:17:30.513 } 00:17:30.513 ] 00:17:30.513 22:25:29 -- common/autotest_common.sh@895 -- # return 0 00:17:30.513 22:25:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:30.772 [2024-07-14 22:25:29.594818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.772 [2024-07-14 22:25:29.596492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.772 [2024-07-14 22:25:29.596556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.772 [2024-07-14 22:25:29.596567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.772 [2024-07-14 
22:25:29.596589] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.772 22:25:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.031 22:25:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.031 "name": "Existed_Raid", 00:17:31.031 "uuid": "d81033ab-72a5-485f-81df-c2fe350d250a", 00:17:31.031 "strip_size_kb": 64, 00:17:31.031 "state": "configuring", 00:17:31.031 "raid_level": "concat", 00:17:31.031 "superblock": true, 00:17:31.031 "num_base_bdevs": 3, 00:17:31.031 "num_base_bdevs_discovered": 1, 00:17:31.031 "num_base_bdevs_operational": 3, 00:17:31.031 "base_bdevs_list": [ 00:17:31.031 { 00:17:31.031 "name": "BaseBdev1", 00:17:31.031 "uuid": "2bb42bc2-ee0f-49ab-a8ce-b25fc62372fe", 00:17:31.031 "is_configured": true, 00:17:31.031 "data_offset": 2048, 00:17:31.031 "data_size": 63488 00:17:31.031 }, 00:17:31.031 { 00:17:31.031 "name": "BaseBdev2", 00:17:31.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.031 "is_configured": false, 00:17:31.031 "data_offset": 0, 00:17:31.031 "data_size": 0 00:17:31.031 }, 00:17:31.031 { 00:17:31.031 "name": "BaseBdev3", 00:17:31.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.031 "is_configured": false, 00:17:31.031 "data_offset": 0, 00:17:31.031 "data_size": 0 00:17:31.031 } 00:17:31.031 ] 00:17:31.031 }' 00:17:31.031 22:25:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.031 22:25:29 -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 22:25:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.857 [2024-07-14 22:25:30.684320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.857 BaseBdev2 00:17:31.857 22:25:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:31.857 22:25:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:31.857 22:25:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.857 22:25:30 -- common/autotest_common.sh@889 -- # local i 00:17:31.857 22:25:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.857 22:25:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.857 22:25:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.857 22:25:30 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:32.116 [ 00:17:32.116 { 00:17:32.116 "name": "BaseBdev2", 00:17:32.116 "aliases": [ 00:17:32.116 "1f496dc1-9459-4692-8406-8ed37507d196" 00:17:32.116 ], 00:17:32.116 "product_name": "Malloc disk", 00:17:32.116 "block_size": 512, 00:17:32.116 "num_blocks": 65536, 00:17:32.116 "uuid": "1f496dc1-9459-4692-8406-8ed37507d196", 00:17:32.116 "assigned_rate_limits": { 00:17:32.116 "rw_ios_per_sec": 0, 00:17:32.116 "rw_mbytes_per_sec": 0, 00:17:32.116 "r_mbytes_per_sec": 0, 00:17:32.116 "w_mbytes_per_sec": 0 00:17:32.116 }, 00:17:32.116 "claimed": true, 00:17:32.116 "claim_type": "exclusive_write", 00:17:32.116 "zoned": false, 00:17:32.116 "supported_io_types": { 00:17:32.116 "read": true, 00:17:32.116 "write": true, 00:17:32.116 "unmap": true, 00:17:32.116 "write_zeroes": true, 00:17:32.116 "flush": true, 00:17:32.116 "reset": true, 00:17:32.116 "compare": false, 00:17:32.116 "compare_and_write": false, 00:17:32.116 "abort": true, 00:17:32.116 "nvme_admin": false, 00:17:32.116 "nvme_io": false 00:17:32.116 }, 00:17:32.116 "memory_domains": [ 00:17:32.116 { 00:17:32.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.116 "dma_device_type": 2 00:17:32.116 } 00:17:32.116 ], 00:17:32.116 "driver_specific": {} 00:17:32.116 } 00:17:32.116 ] 00:17:32.116 22:25:31 -- common/autotest_common.sh@895 -- # return 0 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.116 22:25:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.375 22:25:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.375 "name": "Existed_Raid", 00:17:32.375 "uuid": "d81033ab-72a5-485f-81df-c2fe350d250a", 00:17:32.375 "strip_size_kb": 64, 00:17:32.375 "state": "configuring", 00:17:32.375 "raid_level": "concat", 00:17:32.375 "superblock": true, 00:17:32.375 "num_base_bdevs": 3, 00:17:32.375 "num_base_bdevs_discovered": 2, 00:17:32.375 "num_base_bdevs_operational": 3, 00:17:32.375 "base_bdevs_list": [ 00:17:32.375 { 00:17:32.375 "name": "BaseBdev1", 00:17:32.375 "uuid": "2bb42bc2-ee0f-49ab-a8ce-b25fc62372fe", 00:17:32.375 "is_configured": true, 00:17:32.375 "data_offset": 2048, 00:17:32.375 "data_size": 63488 00:17:32.375 }, 00:17:32.375 { 00:17:32.375 "name": "BaseBdev2", 00:17:32.375 "uuid": "1f496dc1-9459-4692-8406-8ed37507d196", 00:17:32.375 "is_configured": true, 00:17:32.375 "data_offset": 2048, 00:17:32.375 
"data_size": 63488 00:17:32.375 }, 00:17:32.375 { 00:17:32.375 "name": "BaseBdev3", 00:17:32.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.375 "is_configured": false, 00:17:32.375 "data_offset": 0, 00:17:32.375 "data_size": 0 00:17:32.375 } 00:17:32.375 ] 00:17:32.375 }' 00:17:32.375 22:25:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.375 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:17:32.941 22:25:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:33.201 [2024-07-14 22:25:32.048266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.201 [2024-07-14 22:25:32.048478] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:33.201 [2024-07-14 22:25:32.048492] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:33.201 BaseBdev3 00:17:33.201 [2024-07-14 22:25:32.048619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:33.201 [2024-07-14 22:25:32.048933] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:33.201 [2024-07-14 22:25:32.048955] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:33.201 [2024-07-14 22:25:32.049081] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.201 22:25:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:33.201 22:25:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:33.201 22:25:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.201 22:25:32 -- common/autotest_common.sh@889 -- # local i 00:17:33.201 22:25:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.201 22:25:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.201 22:25:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.460 22:25:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:33.460 [ 00:17:33.460 { 00:17:33.460 "name": "BaseBdev3", 00:17:33.460 "aliases": [ 00:17:33.460 "6f0e5fe1-54e1-425f-a835-3a2a01f84376" 00:17:33.460 ], 00:17:33.460 "product_name": "Malloc disk", 00:17:33.460 "block_size": 512, 00:17:33.460 "num_blocks": 65536, 00:17:33.460 "uuid": "6f0e5fe1-54e1-425f-a835-3a2a01f84376", 00:17:33.460 "assigned_rate_limits": { 00:17:33.460 "rw_ios_per_sec": 0, 00:17:33.460 "rw_mbytes_per_sec": 0, 00:17:33.460 "r_mbytes_per_sec": 0, 00:17:33.460 "w_mbytes_per_sec": 0 00:17:33.460 }, 00:17:33.460 "claimed": true, 00:17:33.460 "claim_type": "exclusive_write", 00:17:33.460 "zoned": false, 00:17:33.460 "supported_io_types": { 00:17:33.460 "read": true, 00:17:33.460 "write": true, 00:17:33.460 "unmap": true, 00:17:33.460 "write_zeroes": true, 00:17:33.460 "flush": true, 00:17:33.460 "reset": true, 00:17:33.460 "compare": false, 00:17:33.460 "compare_and_write": false, 00:17:33.460 "abort": true, 00:17:33.460 "nvme_admin": false, 00:17:33.460 "nvme_io": false 00:17:33.460 }, 00:17:33.460 "memory_domains": [ 00:17:33.460 { 00:17:33.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.460 "dma_device_type": 2 00:17:33.460 } 00:17:33.460 ], 00:17:33.460 "driver_specific": {} 00:17:33.460 } 00:17:33.460 ] 00:17:33.460 
22:25:32 -- common/autotest_common.sh@895 -- # return 0 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.460 22:25:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.719 22:25:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.719 "name": "Existed_Raid", 00:17:33.719 "uuid": "d81033ab-72a5-485f-81df-c2fe350d250a", 00:17:33.719 "strip_size_kb": 64, 00:17:33.719 "state": "online", 00:17:33.719 "raid_level": "concat", 00:17:33.719 "superblock": true, 00:17:33.719 "num_base_bdevs": 3, 00:17:33.719 "num_base_bdevs_discovered": 3, 00:17:33.719 "num_base_bdevs_operational": 3, 00:17:33.719 "base_bdevs_list": [ 00:17:33.719 { 00:17:33.719 "name": "BaseBdev1", 00:17:33.719 "uuid": "2bb42bc2-ee0f-49ab-a8ce-b25fc62372fe", 00:17:33.719 "is_configured": true, 00:17:33.719 "data_offset": 2048, 00:17:33.719 "data_size": 63488 00:17:33.719 }, 00:17:33.719 { 00:17:33.719 "name": "BaseBdev2", 00:17:33.719 "uuid": "1f496dc1-9459-4692-8406-8ed37507d196", 00:17:33.719 "is_configured": true, 00:17:33.719 "data_offset": 2048, 00:17:33.719 "data_size": 63488 00:17:33.719 }, 00:17:33.719 { 00:17:33.719 "name": "BaseBdev3", 00:17:33.719 "uuid": "6f0e5fe1-54e1-425f-a835-3a2a01f84376", 00:17:33.719 "is_configured": true, 00:17:33.719 "data_offset": 2048, 00:17:33.719 "data_size": 63488 00:17:33.719 } 00:17:33.719 ] 00:17:33.719 }' 00:17:33.719 22:25:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.719 22:25:32 -- common/autotest_common.sh@10 -- # set +x 00:17:34.350 22:25:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:34.606 [2024-07-14 22:25:33.476589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.606 [2024-07-14 22:25:33.476616] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.606 [2024-07-14 22:25:33.476666] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.606 22:25:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:34.606 22:25:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:34.606 22:25:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:34.607 22:25:33 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.607 22:25:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.865 22:25:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.865 "name": "Existed_Raid", 00:17:34.865 "uuid": "d81033ab-72a5-485f-81df-c2fe350d250a", 00:17:34.865 "strip_size_kb": 64, 00:17:34.865 "state": "offline", 00:17:34.865 "raid_level": "concat", 00:17:34.865 "superblock": true, 00:17:34.865 "num_base_bdevs": 3, 00:17:34.865 "num_base_bdevs_discovered": 2, 00:17:34.865 "num_base_bdevs_operational": 2, 00:17:34.865 "base_bdevs_list": [ 00:17:34.865 { 00:17:34.865 "name": null, 00:17:34.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.865 "is_configured": false, 00:17:34.865 "data_offset": 2048, 00:17:34.865 "data_size": 63488 00:17:34.865 }, 00:17:34.865 { 00:17:34.865 "name": "BaseBdev2", 00:17:34.865 "uuid": "1f496dc1-9459-4692-8406-8ed37507d196", 00:17:34.865 "is_configured": true, 00:17:34.865 "data_offset": 2048, 00:17:34.865 "data_size": 63488 00:17:34.865 }, 00:17:34.865 { 00:17:34.865 "name": "BaseBdev3", 00:17:34.865 "uuid": "6f0e5fe1-54e1-425f-a835-3a2a01f84376", 00:17:34.865 "is_configured": true, 00:17:34.865 "data_offset": 2048, 00:17:34.865 "data_size": 63488 00:17:34.865 } 00:17:34.865 ] 00:17:34.865 }' 00:17:34.865 22:25:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.865 22:25:33 -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 22:25:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:35.430 22:25:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:35.430 22:25:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.430 22:25:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:35.686 22:25:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:35.686 22:25:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:35.686 22:25:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:35.943 [2024-07-14 22:25:34.871394] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:35.943 22:25:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:35.943 22:25:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:35.943 22:25:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.943 22:25:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:36.200 22:25:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:36.200 22:25:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.200 22:25:35 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:36.457 [2024-07-14 22:25:35.378579] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.457 [2024-07-14 22:25:35.378641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:36.457 22:25:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:36.457 22:25:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:36.457 22:25:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.457 22:25:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:36.715 22:25:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:36.715 22:25:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:36.715 22:25:35 -- bdev/bdev_raid.sh@287 -- # killprocess 119206 00:17:36.715 22:25:35 -- common/autotest_common.sh@926 -- # '[' -z 119206 ']' 00:17:36.715 22:25:35 -- common/autotest_common.sh@930 -- # kill -0 119206 00:17:36.715 22:25:35 -- common/autotest_common.sh@931 -- # uname 00:17:36.715 22:25:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.715 22:25:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119206 00:17:36.715 killing process with pid 119206 00:17:36.715 22:25:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:36.715 22:25:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:36.715 22:25:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119206' 00:17:36.715 22:25:35 -- common/autotest_common.sh@945 -- # kill 119206 00:17:36.715 22:25:35 -- common/autotest_common.sh@950 -- # wait 119206 00:17:36.715 [2024-07-14 22:25:35.667206] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.715 [2024-07-14 22:25:35.667318] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.648 ************************************ 00:17:37.648 END TEST raid_state_function_test_sb 00:17:37.648 ************************************ 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:37.648 00:17:37.648 real 0m12.557s 00:17:37.648 user 0m22.390s 00:17:37.648 sys 0m1.462s 00:17:37.648 22:25:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.648 22:25:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:37.648 22:25:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:37.648 22:25:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:37.648 22:25:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.648 ************************************ 00:17:37.648 START TEST raid_superblock_test 00:17:37.648 ************************************ 00:17:37.648 22:25:36 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:37.648 22:25:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=119637 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119637 /var/tmp/spdk-raid.sock 00:17:37.649 22:25:36 -- common/autotest_common.sh@819 -- # '[' -z 119637 ']' 00:17:37.649 22:25:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:37.649 22:25:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.649 22:25:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.649 22:25:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.649 22:25:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.649 22:25:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.649 [2024-07-14 22:25:36.693787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:37.649 [2024-07-14 22:25:36.693986] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119637 ] 00:17:37.907 [2024-07-14 22:25:36.860361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.165 [2024-07-14 22:25:37.011611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.166 [2024-07-14 22:25:37.172730] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.732 22:25:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.732 22:25:37 -- common/autotest_common.sh@852 -- # return 0 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:38.732 22:25:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:38.990 malloc1 00:17:38.990 22:25:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:38.990 [2024-07-14 22:25:38.010097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:38.990 [2024-07-14 22:25:38.010192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.990 [2024-07-14 22:25:38.010222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:38.990 [2024-07-14 22:25:38.010265] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.990 [2024-07-14 22:25:38.012440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.990 [2024-07-14 22:25:38.012484] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:38.990 pt1 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:38.990 22:25:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:39.248 malloc2 00:17:39.248 22:25:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.507 [2024-07-14 22:25:38.442653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.507 [2024-07-14 22:25:38.442736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.507 [2024-07-14 22:25:38.442776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:39.507 [2024-07-14 22:25:38.442826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.507 [2024-07-14 22:25:38.445003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.507 [2024-07-14 22:25:38.445050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.507 pt2 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.507 22:25:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:39.766 malloc3 00:17:39.766 22:25:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:17:40.024 [2024-07-14 22:25:38.919379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:40.024 [2024-07-14 22:25:38.919479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.024 [2024-07-14 22:25:38.919517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:40.025 [2024-07-14 22:25:38.919558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.025 [2024-07-14 22:25:38.921814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.025 [2024-07-14 22:25:38.921886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:40.025 pt3 00:17:40.025 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:40.025 22:25:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:40.025 22:25:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:40.284 [2024-07-14 22:25:39.119441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.284 [2024-07-14 22:25:39.121289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.284 [2024-07-14 22:25:39.121355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:40.284 [2024-07-14 22:25:39.121535] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:40.284 [2024-07-14 22:25:39.121563] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.284 [2024-07-14 22:25:39.121697] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:40.284 [2024-07-14 22:25:39.122083] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:40.284 [2024-07-14 22:25:39.122107] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:40.284 [2024-07-14 22:25:39.122242] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.284 "name": "raid_bdev1", 00:17:40.284 "uuid": "7737fcc9-d284-4bf6-809e-bd30de7fd327", 00:17:40.284 "strip_size_kb": 64, 00:17:40.284 "state": "online", 00:17:40.284 "raid_level": "concat", 
00:17:40.284 "superblock": true, 00:17:40.284 "num_base_bdevs": 3, 00:17:40.284 "num_base_bdevs_discovered": 3, 00:17:40.284 "num_base_bdevs_operational": 3, 00:17:40.284 "base_bdevs_list": [ 00:17:40.284 { 00:17:40.284 "name": "pt1", 00:17:40.284 "uuid": "be64099f-5d7a-5ad1-875f-b5ceab5202b7", 00:17:40.284 "is_configured": true, 00:17:40.284 "data_offset": 2048, 00:17:40.284 "data_size": 63488 00:17:40.284 }, 00:17:40.284 { 00:17:40.284 "name": "pt2", 00:17:40.284 "uuid": "3361ff92-d213-5020-a57a-c8fc7d5e3369", 00:17:40.284 "is_configured": true, 00:17:40.284 "data_offset": 2048, 00:17:40.284 "data_size": 63488 00:17:40.284 }, 00:17:40.284 { 00:17:40.284 "name": "pt3", 00:17:40.284 "uuid": "32d8a31c-5d6b-57cd-94ac-e659980195dd", 00:17:40.284 "is_configured": true, 00:17:40.284 "data_offset": 2048, 00:17:40.284 "data_size": 63488 00:17:40.284 } 00:17:40.284 ] 00:17:40.284 }' 00:17:40.284 22:25:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.284 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:17:41.221 22:25:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.221 22:25:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:41.221 [2024-07-14 22:25:40.159746] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.221 22:25:40 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7737fcc9-d284-4bf6-809e-bd30de7fd327 00:17:41.221 22:25:40 -- bdev/bdev_raid.sh@380 -- # '[' -z 7737fcc9-d284-4bf6-809e-bd30de7fd327 ']' 00:17:41.221 22:25:40 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:41.479 [2024-07-14 22:25:40.399610] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.479 [2024-07-14 22:25:40.399636] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.479 [2024-07-14 22:25:40.399700] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.479 [2024-07-14 22:25:40.399755] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.479 [2024-07-14 22:25:40.399777] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:41.479 22:25:40 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.479 22:25:40 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:41.738 22:25:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:41.997 22:25:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:41.997 22:25:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:42.256 22:25:41 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:17:42.256 22:25:41 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:42.515 22:25:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:42.515 22:25:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:42.515 22:25:41 -- common/autotest_common.sh@640 -- # local es=0 00:17:42.515 22:25:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:42.515 22:25:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.515 22:25:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.515 22:25:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.515 22:25:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.515 22:25:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.515 22:25:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.515 22:25:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.515 22:25:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:42.515 22:25:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:42.788 [2024-07-14 22:25:41.659871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:42.788 [2024-07-14 22:25:41.661565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:42.788 [2024-07-14 22:25:41.661637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:42.788 [2024-07-14 22:25:41.661690] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:42.788 [2024-07-14 22:25:41.661775] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:42.788 [2024-07-14 22:25:41.661836] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:42.788 [2024-07-14 22:25:41.661901] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.788 [2024-07-14 22:25:41.661915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:42.788 request: 00:17:42.788 { 00:17:42.788 "name": "raid_bdev1", 00:17:42.788 "raid_level": "concat", 00:17:42.788 "base_bdevs": [ 00:17:42.788 "malloc1", 00:17:42.788 "malloc2", 00:17:42.788 "malloc3" 00:17:42.788 ], 00:17:42.788 "superblock": false, 00:17:42.788 "strip_size_kb": 64, 00:17:42.788 "method": "bdev_raid_create", 00:17:42.788 "req_id": 1 00:17:42.788 } 00:17:42.788 Got JSON-RPC error response 00:17:42.788 response: 00:17:42.788 { 00:17:42.788 "code": -17, 00:17:42.788 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:42.788 } 00:17:42.788 22:25:41 -- common/autotest_common.sh@643 -- # es=1 00:17:42.788 22:25:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
00:17:42.788 22:25:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:42.788 22:25:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:42.788 22:25:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.788 22:25:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:43.047 22:25:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:43.047 22:25:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:43.047 22:25:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:43.047 [2024-07-14 22:25:42.059867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:43.047 [2024-07-14 22:25:42.059938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.047 [2024-07-14 22:25:42.059971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:43.047 [2024-07-14 22:25:42.059990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.047 [2024-07-14 22:25:42.062136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.047 [2024-07-14 22:25:42.062185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:43.047 [2024-07-14 22:25:42.062301] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:43.047 [2024-07-14 22:25:42.062347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.047 pt1 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.047 22:25:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.306 22:25:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.306 "name": "raid_bdev1", 00:17:43.306 "uuid": "7737fcc9-d284-4bf6-809e-bd30de7fd327", 00:17:43.306 "strip_size_kb": 64, 00:17:43.306 "state": "configuring", 00:17:43.306 "raid_level": "concat", 00:17:43.306 "superblock": true, 00:17:43.306 "num_base_bdevs": 3, 00:17:43.306 "num_base_bdevs_discovered": 1, 00:17:43.306 "num_base_bdevs_operational": 3, 00:17:43.306 "base_bdevs_list": [ 00:17:43.306 { 00:17:43.306 "name": "pt1", 00:17:43.306 "uuid": "be64099f-5d7a-5ad1-875f-b5ceab5202b7", 00:17:43.306 "is_configured": true, 00:17:43.306 "data_offset": 2048, 00:17:43.306 "data_size": 63488 00:17:43.306 }, 00:17:43.306 { 00:17:43.306 "name": null, 00:17:43.306 "uuid": "3361ff92-d213-5020-a57a-c8fc7d5e3369", 00:17:43.306 "is_configured": 
false, 00:17:43.306 "data_offset": 2048, 00:17:43.306 "data_size": 63488 00:17:43.306 }, 00:17:43.306 { 00:17:43.306 "name": null, 00:17:43.306 "uuid": "32d8a31c-5d6b-57cd-94ac-e659980195dd", 00:17:43.306 "is_configured": false, 00:17:43.306 "data_offset": 2048, 00:17:43.306 "data_size": 63488 00:17:43.306 } 00:17:43.306 ] 00:17:43.306 }' 00:17:43.306 22:25:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.306 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.872 22:25:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:43.872 22:25:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.130 [2024-07-14 22:25:43.164080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.130 [2024-07-14 22:25:43.164159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.130 [2024-07-14 22:25:43.164199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:44.130 [2024-07-14 22:25:43.164219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.130 [2024-07-14 22:25:43.164633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.130 [2024-07-14 22:25:43.164672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:44.130 [2024-07-14 22:25:43.164801] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:44.130 [2024-07-14 22:25:43.164826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.130 pt2 00:17:44.130 22:25:43 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:44.388 [2024-07-14 22:25:43.364138] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.388 22:25:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.646 22:25:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.646 "name": "raid_bdev1", 00:17:44.646 "uuid": "7737fcc9-d284-4bf6-809e-bd30de7fd327", 00:17:44.646 "strip_size_kb": 64, 00:17:44.646 "state": "configuring", 00:17:44.646 "raid_level": "concat", 00:17:44.646 "superblock": true, 00:17:44.646 "num_base_bdevs": 3, 00:17:44.646 "num_base_bdevs_discovered": 1, 00:17:44.646 "num_base_bdevs_operational": 3, 00:17:44.646 "base_bdevs_list": [ 00:17:44.646 { 00:17:44.646 "name": "pt1", 
00:17:44.646 "uuid": "be64099f-5d7a-5ad1-875f-b5ceab5202b7", 00:17:44.646 "is_configured": true, 00:17:44.646 "data_offset": 2048, 00:17:44.646 "data_size": 63488 00:17:44.646 }, 00:17:44.646 { 00:17:44.646 "name": null, 00:17:44.646 "uuid": "3361ff92-d213-5020-a57a-c8fc7d5e3369", 00:17:44.646 "is_configured": false, 00:17:44.646 "data_offset": 2048, 00:17:44.646 "data_size": 63488 00:17:44.646 }, 00:17:44.646 { 00:17:44.646 "name": null, 00:17:44.646 "uuid": "32d8a31c-5d6b-57cd-94ac-e659980195dd", 00:17:44.646 "is_configured": false, 00:17:44.646 "data_offset": 2048, 00:17:44.646 "data_size": 63488 00:17:44.646 } 00:17:44.646 ] 00:17:44.646 }' 00:17:44.646 22:25:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.646 22:25:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.213 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:45.213 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:45.213 22:25:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.471 [2024-07-14 22:25:44.496355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.471 [2024-07-14 22:25:44.496436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.471 [2024-07-14 22:25:44.496468] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:45.471 [2024-07-14 22:25:44.496502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.471 [2024-07-14 22:25:44.496938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.471 [2024-07-14 22:25:44.496983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.471 [2024-07-14 22:25:44.497109] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:45.471 [2024-07-14 22:25:44.497135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.471 pt2 00:17:45.471 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:45.471 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:45.471 22:25:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:45.729 [2024-07-14 22:25:44.748408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:45.730 [2024-07-14 22:25:44.748480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.730 [2024-07-14 22:25:44.748509] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:45.730 [2024-07-14 22:25:44.748533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.730 [2024-07-14 22:25:44.748920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.730 [2024-07-14 22:25:44.748967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:45.730 [2024-07-14 22:25:44.749095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:45.730 [2024-07-14 22:25:44.749124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:45.730 [2024-07-14 22:25:44.749230] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:17:45.730 [2024-07-14 22:25:44.749248] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:45.730 [2024-07-14 22:25:44.749355] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:45.730 [2024-07-14 22:25:44.749699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:45.730 [2024-07-14 22:25:44.749721] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:45.730 [2024-07-14 22:25:44.749869] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.730 pt3 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.730 22:25:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.012 22:25:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.012 "name": "raid_bdev1", 00:17:46.012 "uuid": "7737fcc9-d284-4bf6-809e-bd30de7fd327", 00:17:46.012 "strip_size_kb": 64, 00:17:46.012 "state": "online", 00:17:46.012 "raid_level": "concat", 00:17:46.012 "superblock": true, 00:17:46.012 "num_base_bdevs": 3, 00:17:46.012 "num_base_bdevs_discovered": 3, 00:17:46.012 "num_base_bdevs_operational": 3, 00:17:46.012 "base_bdevs_list": [ 00:17:46.012 { 00:17:46.012 "name": "pt1", 00:17:46.012 "uuid": "be64099f-5d7a-5ad1-875f-b5ceab5202b7", 00:17:46.012 "is_configured": true, 00:17:46.012 "data_offset": 2048, 00:17:46.012 "data_size": 63488 00:17:46.012 }, 00:17:46.012 { 00:17:46.012 "name": "pt2", 00:17:46.012 "uuid": "3361ff92-d213-5020-a57a-c8fc7d5e3369", 00:17:46.012 "is_configured": true, 00:17:46.012 "data_offset": 2048, 00:17:46.012 "data_size": 63488 00:17:46.012 }, 00:17:46.012 { 00:17:46.012 "name": "pt3", 00:17:46.012 "uuid": "32d8a31c-5d6b-57cd-94ac-e659980195dd", 00:17:46.012 "is_configured": true, 00:17:46.012 "data_offset": 2048, 00:17:46.012 "data_size": 63488 00:17:46.012 } 00:17:46.012 ] 00:17:46.012 }' 00:17:46.012 22:25:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.012 22:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:46.580 22:25:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:46.580 22:25:45 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:46.838 [2024-07-14 22:25:45.864808] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.838 22:25:45 -- bdev/bdev_raid.sh@430 -- # '[' 
7737fcc9-d284-4bf6-809e-bd30de7fd327 '!=' 7737fcc9-d284-4bf6-809e-bd30de7fd327 ']' 00:17:46.838 22:25:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:46.838 22:25:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:46.838 22:25:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:46.838 22:25:45 -- bdev/bdev_raid.sh@511 -- # killprocess 119637 00:17:46.838 22:25:45 -- common/autotest_common.sh@926 -- # '[' -z 119637 ']' 00:17:46.838 22:25:45 -- common/autotest_common.sh@930 -- # kill -0 119637 00:17:46.838 22:25:45 -- common/autotest_common.sh@931 -- # uname 00:17:46.838 22:25:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.838 22:25:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119637 00:17:47.096 killing process with pid 119637 00:17:47.096 22:25:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:47.096 22:25:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.096 22:25:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119637' 00:17:47.096 22:25:45 -- common/autotest_common.sh@945 -- # kill 119637 00:17:47.096 22:25:45 -- common/autotest_common.sh@950 -- # wait 119637 00:17:47.096 [2024-07-14 22:25:45.903205] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.096 [2024-07-14 22:25:45.903269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.096 [2024-07-14 22:25:45.903325] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.096 [2024-07-14 22:25:45.903334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:47.096 [2024-07-14 22:25:46.091168] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.030 ************************************ 00:17:48.030 END TEST raid_superblock_test 00:17:48.030 ************************************ 00:17:48.030 22:25:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:48.030 00:17:48.030 real 0m10.370s 00:17:48.030 user 0m18.186s 00:17:48.030 sys 0m1.239s 00:17:48.030 22:25:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.030 22:25:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:48.030 22:25:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:48.030 22:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.030 22:25:47 -- common/autotest_common.sh@10 -- # set +x 00:17:48.030 ************************************ 00:17:48.030 START TEST raid_state_function_test 00:17:48.030 ************************************ 00:17:48.030 22:25:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.030 22:25:47 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=119959 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119959' 00:17:48.030 Process raid pid: 119959 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119959 /var/tmp/spdk-raid.sock 00:17:48.030 22:25:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:48.030 22:25:47 -- common/autotest_common.sh@819 -- # '[' -z 119959 ']' 00:17:48.030 22:25:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:48.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:48.030 22:25:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.030 22:25:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:48.030 22:25:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.030 22:25:47 -- common/autotest_common.sh@10 -- # set +x 00:17:48.289 [2024-07-14 22:25:47.119701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
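The startup sequence traced above — launch bdev_svc on a private RPC socket, then block in waitforlisten until the app answers — can be reproduced outside the harness. A minimal sketch, assuming the SPDK tree location used in this run; the poll loop is illustrative and simpler than the harness's actual waitforlisten implementation:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk     # assumed build location, per this log
    SOCK=/var/tmp/spdk-raid.sock
    # Start the minimal bdev service with raid debug logging, as the test does.
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    # Poll the RPC socket until the app responds (stand-in for waitforlisten).
    until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
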
00:17:48.289 [2024-07-14 22:25:47.119903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.289 [2024-07-14 22:25:47.287613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.548 [2024-07-14 22:25:47.442366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.548 [2024-07-14 22:25:47.606486] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.116 22:25:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.116 22:25:48 -- common/autotest_common.sh@852 -- # return 0 00:17:49.116 22:25:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:49.374 [2024-07-14 22:25:48.289632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.374 [2024-07-14 22:25:48.289708] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.374 [2024-07-14 22:25:48.289720] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.374 [2024-07-14 22:25:48.289739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.374 [2024-07-14 22:25:48.289746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.374 [2024-07-14 22:25:48.289781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.374 22:25:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.633 22:25:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.633 "name": "Existed_Raid", 00:17:49.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.633 "strip_size_kb": 0, 00:17:49.633 "state": "configuring", 00:17:49.633 "raid_level": "raid1", 00:17:49.633 "superblock": false, 00:17:49.633 "num_base_bdevs": 3, 00:17:49.633 "num_base_bdevs_discovered": 0, 00:17:49.633 "num_base_bdevs_operational": 3, 00:17:49.633 "base_bdevs_list": [ 00:17:49.633 { 00:17:49.633 "name": "BaseBdev1", 00:17:49.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.633 "is_configured": false, 00:17:49.633 "data_offset": 0, 00:17:49.633 "data_size": 0 00:17:49.633 }, 00:17:49.633 { 00:17:49.633 "name": "BaseBdev2", 00:17:49.633 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:49.633 "is_configured": false, 00:17:49.633 "data_offset": 0, 00:17:49.633 "data_size": 0 00:17:49.633 }, 00:17:49.633 { 00:17:49.633 "name": "BaseBdev3", 00:17:49.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.633 "is_configured": false, 00:17:49.633 "data_offset": 0, 00:17:49.633 "data_size": 0 00:17:49.633 } 00:17:49.633 ] 00:17:49.633 }' 00:17:49.633 22:25:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.633 22:25:48 -- common/autotest_common.sh@10 -- # set +x 00:17:50.200 22:25:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:50.459 [2024-07-14 22:25:49.281672] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.459 [2024-07-14 22:25:49.281701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:50.459 22:25:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:50.459 [2024-07-14 22:25:49.465725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.459 [2024-07-14 22:25:49.465774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.459 [2024-07-14 22:25:49.465784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.459 [2024-07-14 22:25:49.465803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.459 [2024-07-14 22:25:49.465809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.459 [2024-07-14 22:25:49.465836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.459 22:25:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:50.718 [2024-07-14 22:25:49.748318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.718 BaseBdev1 00:17:50.718 22:25:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:50.718 22:25:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:50.718 22:25:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:50.718 22:25:49 -- common/autotest_common.sh@889 -- # local i 00:17:50.718 22:25:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:50.718 22:25:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:50.718 22:25:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.976 22:25:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:51.234 [ 00:17:51.234 { 00:17:51.234 "name": "BaseBdev1", 00:17:51.234 "aliases": [ 00:17:51.234 "a848d764-9e1c-40cd-adcb-c1deb9435162" 00:17:51.234 ], 00:17:51.234 "product_name": "Malloc disk", 00:17:51.234 "block_size": 512, 00:17:51.234 "num_blocks": 65536, 00:17:51.234 "uuid": "a848d764-9e1c-40cd-adcb-c1deb9435162", 00:17:51.234 "assigned_rate_limits": { 00:17:51.234 "rw_ios_per_sec": 0, 00:17:51.234 "rw_mbytes_per_sec": 0, 00:17:51.234 "r_mbytes_per_sec": 0, 00:17:51.234 "w_mbytes_per_sec": 0 
00:17:51.234 }, 00:17:51.234 "claimed": true, 00:17:51.234 "claim_type": "exclusive_write", 00:17:51.234 "zoned": false, 00:17:51.234 "supported_io_types": { 00:17:51.234 "read": true, 00:17:51.234 "write": true, 00:17:51.234 "unmap": true, 00:17:51.234 "write_zeroes": true, 00:17:51.234 "flush": true, 00:17:51.234 "reset": true, 00:17:51.234 "compare": false, 00:17:51.234 "compare_and_write": false, 00:17:51.234 "abort": true, 00:17:51.234 "nvme_admin": false, 00:17:51.234 "nvme_io": false 00:17:51.234 }, 00:17:51.234 "memory_domains": [ 00:17:51.234 { 00:17:51.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.234 "dma_device_type": 2 00:17:51.235 } 00:17:51.235 ], 00:17:51.235 "driver_specific": {} 00:17:51.235 } 00:17:51.235 ] 00:17:51.235 22:25:50 -- common/autotest_common.sh@895 -- # return 0 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.235 22:25:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.493 22:25:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.493 "name": "Existed_Raid", 00:17:51.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.493 "strip_size_kb": 0, 00:17:51.493 "state": "configuring", 00:17:51.493 "raid_level": "raid1", 00:17:51.493 "superblock": false, 00:17:51.493 "num_base_bdevs": 3, 00:17:51.493 "num_base_bdevs_discovered": 1, 00:17:51.493 "num_base_bdevs_operational": 3, 00:17:51.494 "base_bdevs_list": [ 00:17:51.494 { 00:17:51.494 "name": "BaseBdev1", 00:17:51.494 "uuid": "a848d764-9e1c-40cd-adcb-c1deb9435162", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 65536 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": "BaseBdev2", 00:17:51.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.494 "is_configured": false, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 0 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": "BaseBdev3", 00:17:51.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.494 "is_configured": false, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 0 00:17:51.494 } 00:17:51.494 ] 00:17:51.494 }' 00:17:51.494 22:25:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.494 22:25:50 -- common/autotest_common.sh@10 -- # set +x 00:17:52.061 22:25:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:52.320 [2024-07-14 22:25:51.316595] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.320 [2024-07-14 22:25:51.316635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:17:52.320 22:25:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:52.320 22:25:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:52.579 [2024-07-14 22:25:51.552667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.579 [2024-07-14 22:25:51.554155] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.579 [2024-07-14 22:25:51.554206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.579 [2024-07-14 22:25:51.554231] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.579 [2024-07-14 22:25:51.554253] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.579 22:25:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.839 22:25:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.839 "name": "Existed_Raid", 00:17:52.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.839 "strip_size_kb": 0, 00:17:52.839 "state": "configuring", 00:17:52.839 "raid_level": "raid1", 00:17:52.839 "superblock": false, 00:17:52.839 "num_base_bdevs": 3, 00:17:52.839 "num_base_bdevs_discovered": 1, 00:17:52.839 "num_base_bdevs_operational": 3, 00:17:52.839 "base_bdevs_list": [ 00:17:52.839 { 00:17:52.839 "name": "BaseBdev1", 00:17:52.839 "uuid": "a848d764-9e1c-40cd-adcb-c1deb9435162", 00:17:52.839 "is_configured": true, 00:17:52.839 "data_offset": 0, 00:17:52.839 "data_size": 65536 00:17:52.839 }, 00:17:52.839 { 00:17:52.839 "name": "BaseBdev2", 00:17:52.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.839 "is_configured": false, 00:17:52.839 "data_offset": 0, 00:17:52.839 "data_size": 0 00:17:52.839 }, 00:17:52.839 { 00:17:52.839 "name": "BaseBdev3", 00:17:52.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.839 "is_configured": false, 00:17:52.839 "data_offset": 0, 00:17:52.839 "data_size": 0 00:17:52.839 } 00:17:52.839 ] 00:17:52.839 }' 00:17:52.839 22:25:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.839 22:25:51 -- common/autotest_common.sh@10 -- # set +x 00:17:53.406 22:25:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:53.972 [2024-07-14 22:25:52.740558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.972 BaseBdev2 00:17:53.972 22:25:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:53.972 22:25:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:53.972 22:25:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:53.972 22:25:52 -- common/autotest_common.sh@889 -- # local i 00:17:53.972 22:25:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:53.972 22:25:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:53.972 22:25:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.972 22:25:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.231 [ 00:17:54.231 { 00:17:54.231 "name": "BaseBdev2", 00:17:54.231 "aliases": [ 00:17:54.231 "4e1640d2-9979-4e4a-9c16-5cfbcdfb9596" 00:17:54.231 ], 00:17:54.231 "product_name": "Malloc disk", 00:17:54.231 "block_size": 512, 00:17:54.231 "num_blocks": 65536, 00:17:54.231 "uuid": "4e1640d2-9979-4e4a-9c16-5cfbcdfb9596", 00:17:54.231 "assigned_rate_limits": { 00:17:54.231 "rw_ios_per_sec": 0, 00:17:54.231 "rw_mbytes_per_sec": 0, 00:17:54.231 "r_mbytes_per_sec": 0, 00:17:54.231 "w_mbytes_per_sec": 0 00:17:54.231 }, 00:17:54.231 "claimed": true, 00:17:54.231 "claim_type": "exclusive_write", 00:17:54.231 "zoned": false, 00:17:54.231 "supported_io_types": { 00:17:54.231 "read": true, 00:17:54.231 "write": true, 00:17:54.231 "unmap": true, 00:17:54.231 "write_zeroes": true, 00:17:54.231 "flush": true, 00:17:54.231 "reset": true, 00:17:54.231 "compare": false, 00:17:54.231 "compare_and_write": false, 00:17:54.231 "abort": true, 00:17:54.231 "nvme_admin": false, 00:17:54.231 "nvme_io": false 00:17:54.231 }, 00:17:54.231 "memory_domains": [ 00:17:54.231 { 00:17:54.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.231 "dma_device_type": 2 00:17:54.231 } 00:17:54.231 ], 00:17:54.231 "driver_specific": {} 00:17:54.231 } 00:17:54.231 ] 00:17:54.231 22:25:53 -- common/autotest_common.sh@895 -- # return 0 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.231 22:25:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.488 22:25:53 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:54.488 "name": "Existed_Raid", 00:17:54.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.488 "strip_size_kb": 0, 00:17:54.488 "state": "configuring", 00:17:54.488 "raid_level": "raid1", 00:17:54.488 "superblock": false, 00:17:54.488 "num_base_bdevs": 3, 00:17:54.488 "num_base_bdevs_discovered": 2, 00:17:54.488 "num_base_bdevs_operational": 3, 00:17:54.488 "base_bdevs_list": [ 00:17:54.488 { 00:17:54.488 "name": "BaseBdev1", 00:17:54.488 "uuid": "a848d764-9e1c-40cd-adcb-c1deb9435162", 00:17:54.488 "is_configured": true, 00:17:54.488 "data_offset": 0, 00:17:54.488 "data_size": 65536 00:17:54.488 }, 00:17:54.488 { 00:17:54.488 "name": "BaseBdev2", 00:17:54.488 "uuid": "4e1640d2-9979-4e4a-9c16-5cfbcdfb9596", 00:17:54.488 "is_configured": true, 00:17:54.488 "data_offset": 0, 00:17:54.488 "data_size": 65536 00:17:54.488 }, 00:17:54.488 { 00:17:54.488 "name": "BaseBdev3", 00:17:54.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.488 "is_configured": false, 00:17:54.488 "data_offset": 0, 00:17:54.488 "data_size": 0 00:17:54.488 } 00:17:54.488 ] 00:17:54.488 }' 00:17:54.488 22:25:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.488 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:17:55.052 22:25:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.310 [2024-07-14 22:25:54.316527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.310 [2024-07-14 22:25:54.316573] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:55.310 [2024-07-14 22:25:54.316582] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:55.310 [2024-07-14 22:25:54.316697] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:55.310 [2024-07-14 22:25:54.317025] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:55.310 [2024-07-14 22:25:54.317046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:55.310 BaseBdev3 00:17:55.310 [2024-07-14 22:25:54.317261] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.310 22:25:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:55.310 22:25:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:55.310 22:25:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.310 22:25:54 -- common/autotest_common.sh@889 -- # local i 00:17:55.310 22:25:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.310 22:25:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.310 22:25:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.568 22:25:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:55.827 [ 00:17:55.827 { 00:17:55.827 "name": "BaseBdev3", 00:17:55.827 "aliases": [ 00:17:55.827 "8ddede40-b9cc-4591-b9be-5b041d90d4d2" 00:17:55.827 ], 00:17:55.827 "product_name": "Malloc disk", 00:17:55.827 "block_size": 512, 00:17:55.827 "num_blocks": 65536, 00:17:55.827 "uuid": "8ddede40-b9cc-4591-b9be-5b041d90d4d2", 00:17:55.827 "assigned_rate_limits": { 00:17:55.827 "rw_ios_per_sec": 0, 00:17:55.827 "rw_mbytes_per_sec": 0, 
00:17:55.827 "r_mbytes_per_sec": 0, 00:17:55.827 "w_mbytes_per_sec": 0 00:17:55.827 }, 00:17:55.827 "claimed": true, 00:17:55.827 "claim_type": "exclusive_write", 00:17:55.827 "zoned": false, 00:17:55.827 "supported_io_types": { 00:17:55.827 "read": true, 00:17:55.827 "write": true, 00:17:55.827 "unmap": true, 00:17:55.827 "write_zeroes": true, 00:17:55.827 "flush": true, 00:17:55.827 "reset": true, 00:17:55.827 "compare": false, 00:17:55.827 "compare_and_write": false, 00:17:55.827 "abort": true, 00:17:55.827 "nvme_admin": false, 00:17:55.827 "nvme_io": false 00:17:55.827 }, 00:17:55.827 "memory_domains": [ 00:17:55.827 { 00:17:55.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.827 "dma_device_type": 2 00:17:55.827 } 00:17:55.827 ], 00:17:55.827 "driver_specific": {} 00:17:55.827 } 00:17:55.827 ] 00:17:55.827 22:25:54 -- common/autotest_common.sh@895 -- # return 0 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.827 22:25:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.085 22:25:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.085 "name": "Existed_Raid", 00:17:56.085 "uuid": "f129e473-9516-4653-9a7a-3e5d21bf4399", 00:17:56.085 "strip_size_kb": 0, 00:17:56.085 "state": "online", 00:17:56.085 "raid_level": "raid1", 00:17:56.085 "superblock": false, 00:17:56.085 "num_base_bdevs": 3, 00:17:56.085 "num_base_bdevs_discovered": 3, 00:17:56.085 "num_base_bdevs_operational": 3, 00:17:56.085 "base_bdevs_list": [ 00:17:56.085 { 00:17:56.085 "name": "BaseBdev1", 00:17:56.085 "uuid": "a848d764-9e1c-40cd-adcb-c1deb9435162", 00:17:56.085 "is_configured": true, 00:17:56.085 "data_offset": 0, 00:17:56.085 "data_size": 65536 00:17:56.085 }, 00:17:56.085 { 00:17:56.085 "name": "BaseBdev2", 00:17:56.085 "uuid": "4e1640d2-9979-4e4a-9c16-5cfbcdfb9596", 00:17:56.085 "is_configured": true, 00:17:56.085 "data_offset": 0, 00:17:56.085 "data_size": 65536 00:17:56.085 }, 00:17:56.085 { 00:17:56.085 "name": "BaseBdev3", 00:17:56.085 "uuid": "8ddede40-b9cc-4591-b9be-5b041d90d4d2", 00:17:56.085 "is_configured": true, 00:17:56.085 "data_offset": 0, 00:17:56.085 "data_size": 65536 00:17:56.085 } 00:17:56.085 ] 00:17:56.085 }' 00:17:56.085 22:25:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.085 22:25:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.652 22:25:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:56.911 [2024-07-14 
22:25:55.780863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.911 22:25:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.170 22:25:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.170 "name": "Existed_Raid", 00:17:57.170 "uuid": "f129e473-9516-4653-9a7a-3e5d21bf4399", 00:17:57.170 "strip_size_kb": 0, 00:17:57.170 "state": "online", 00:17:57.170 "raid_level": "raid1", 00:17:57.170 "superblock": false, 00:17:57.170 "num_base_bdevs": 3, 00:17:57.170 "num_base_bdevs_discovered": 2, 00:17:57.170 "num_base_bdevs_operational": 2, 00:17:57.170 "base_bdevs_list": [ 00:17:57.170 { 00:17:57.170 "name": null, 00:17:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.170 "is_configured": false, 00:17:57.170 "data_offset": 0, 00:17:57.170 "data_size": 65536 00:17:57.170 }, 00:17:57.170 { 00:17:57.170 "name": "BaseBdev2", 00:17:57.170 "uuid": "4e1640d2-9979-4e4a-9c16-5cfbcdfb9596", 00:17:57.170 "is_configured": true, 00:17:57.170 "data_offset": 0, 00:17:57.170 "data_size": 65536 00:17:57.170 }, 00:17:57.170 { 00:17:57.170 "name": "BaseBdev3", 00:17:57.170 "uuid": "8ddede40-b9cc-4591-b9be-5b041d90d4d2", 00:17:57.170 "is_configured": true, 00:17:57.170 "data_offset": 0, 00:17:57.170 "data_size": 65536 00:17:57.170 } 00:17:57.170 ] 00:17:57.170 }' 00:17:57.170 22:25:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.170 22:25:56 -- common/autotest_common.sh@10 -- # set +x 00:17:57.737 22:25:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:57.737 22:25:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.737 22:25:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.738 22:25:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:58.055 22:25:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:58.055 22:25:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.055 22:25:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:58.055 [2024-07-14 22:25:57.115390] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
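The verification interleaved with this output reduces to one RPC plus jq filters: verify_raid_bdev_state fetches the array's JSON and compares individual fields. A hand-run equivalent of the check being made here — raid1 has redundancy, so removing BaseBdev1 must leave the array online with two of three members — might look like the following; the jq expressions are paraphrased from the traces, not copied from the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$info")
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
    # One of three raid1 members removed: expect online with 2 discovered.
    [[ "$state" == online && "$discovered" == 2 ]] || echo "unexpected: $state/$discovered"
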
00:17:58.314 22:25:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:58.314 22:25:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:58.314 22:25:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.314 22:25:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:58.573 22:25:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:58.573 22:25:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.573 22:25:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:58.573 [2024-07-14 22:25:57.622041] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:58.573 [2024-07-14 22:25:57.622071] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.573 [2024-07-14 22:25:57.622131] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.832 [2024-07-14 22:25:57.684645] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.832 [2024-07-14 22:25:57.684675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:58.832 22:25:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:58.832 22:25:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:58.832 22:25:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.832 22:25:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.091 22:25:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:59.091 22:25:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:59.091 22:25:57 -- bdev/bdev_raid.sh@287 -- # killprocess 119959 00:17:59.091 22:25:57 -- common/autotest_common.sh@926 -- # '[' -z 119959 ']' 00:17:59.091 22:25:57 -- common/autotest_common.sh@930 -- # kill -0 119959 00:17:59.091 22:25:57 -- common/autotest_common.sh@931 -- # uname 00:17:59.091 22:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.091 22:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119959 00:17:59.091 killing process with pid 119959 00:17:59.091 22:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.091 22:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.091 22:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119959' 00:17:59.091 22:25:57 -- common/autotest_common.sh@945 -- # kill 119959 00:17:59.091 22:25:57 -- common/autotest_common.sh@950 -- # wait 119959 00:17:59.091 [2024-07-14 22:25:57.985191] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.091 [2024-07-14 22:25:57.985279] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.028 ************************************ 00:18:00.028 END TEST raid_state_function_test 00:18:00.028 ************************************ 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:00.028 00:18:00.028 real 0m11.839s 00:18:00.028 user 0m21.041s 00:18:00.028 sys 0m1.465s 00:18:00.028 22:25:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.028 22:25:58 -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
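The superblock variant starting here re-runs the same raid_state_function_test function with true as its third argument, which the script turns into the -s flag on every bdev_raid_create call. Side by side, the two invocation forms as they appear in the traces (with -s, raid metadata is persisted in a superblock on the base bdevs rather than kept only in memory):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Previous test (superblock=false):
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # This test (superblock=true): note the added -s flag.
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
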
00:18:00.028 22:25:58 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:00.028 22:25:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.028 22:25:58 -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 ************************************ 00:18:00.028 START TEST raid_state_function_test_sb 00:18:00.028 ************************************ 00:18:00.028 22:25:58 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=120346 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120346' 00:18:00.028 Process raid pid: 120346 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120346 /var/tmp/spdk-raid.sock 00:18:00.028 22:25:58 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:00.028 22:25:58 -- common/autotest_common.sh@819 -- # '[' -z 120346 ']' 00:18:00.028 22:25:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:00.028 22:25:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:00.028 22:25:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
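Several steps below pair bdev_malloc_create with waitforbdev; the traces expand the latter into bdev_wait_for_examine followed by a bdev_get_bdevs lookup with a 2000 ms timeout. A rough hand-run equivalent, with command forms taken from the traces and error handling omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Create a 32 MiB malloc bdev with 512-byte blocks, as the test does...
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    # ...then wait until examine completes and the bdev answers a lookup.
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null
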
00:18:00.028 22:25:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.028 22:25:58 -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 [2024-07-14 22:25:59.026677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:00.028 [2024-07-14 22:25:59.026908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.287 [2024-07-14 22:25:59.193544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.545 [2024-07-14 22:25:59.351512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.545 [2024-07-14 22:25:59.517184] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.112 22:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.112 22:25:59 -- common/autotest_common.sh@852 -- # return 0 00:18:01.112 22:25:59 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:01.371 [2024-07-14 22:26:00.211059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.371 [2024-07-14 22:26:00.211139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.371 [2024-07-14 22:26:00.211151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.371 [2024-07-14 22:26:00.211170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.371 [2024-07-14 22:26:00.211176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.371 [2024-07-14 22:26:00.211213] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.371 22:26:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.628 22:26:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.628 "name": "Existed_Raid", 00:18:01.628 "uuid": "c9038716-3335-4a76-a65d-bc42b5b8d764", 00:18:01.628 "strip_size_kb": 0, 00:18:01.628 "state": "configuring", 00:18:01.628 "raid_level": "raid1", 00:18:01.628 "superblock": true, 00:18:01.628 "num_base_bdevs": 3, 00:18:01.628 "num_base_bdevs_discovered": 0, 00:18:01.628 "num_base_bdevs_operational": 3, 00:18:01.628 "base_bdevs_list": [ 00:18:01.628 { 00:18:01.628 "name": "BaseBdev1", 
00:18:01.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.628 "is_configured": false, 00:18:01.628 "data_offset": 0, 00:18:01.628 "data_size": 0 00:18:01.628 }, 00:18:01.628 { 00:18:01.628 "name": "BaseBdev2", 00:18:01.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.628 "is_configured": false, 00:18:01.628 "data_offset": 0, 00:18:01.628 "data_size": 0 00:18:01.628 }, 00:18:01.628 { 00:18:01.628 "name": "BaseBdev3", 00:18:01.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.628 "is_configured": false, 00:18:01.628 "data_offset": 0, 00:18:01.628 "data_size": 0 00:18:01.628 } 00:18:01.628 ] 00:18:01.628 }' 00:18:01.628 22:26:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.628 22:26:00 -- common/autotest_common.sh@10 -- # set +x 00:18:02.194 22:26:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.451 [2024-07-14 22:26:01.263131] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.451 [2024-07-14 22:26:01.263162] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:02.451 22:26:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:02.451 [2024-07-14 22:26:01.511210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.451 [2024-07-14 22:26:01.511271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.451 [2024-07-14 22:26:01.511282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.451 [2024-07-14 22:26:01.511297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.451 [2024-07-14 22:26:01.511303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.451 [2024-07-14 22:26:01.511328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.708 22:26:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.708 [2024-07-14 22:26:01.724398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.708 BaseBdev1 00:18:02.708 22:26:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:02.708 22:26:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:02.708 22:26:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:02.708 22:26:01 -- common/autotest_common.sh@889 -- # local i 00:18:02.708 22:26:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:02.708 22:26:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:02.708 22:26:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.005 22:26:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.262 [ 00:18:03.262 { 00:18:03.262 "name": "BaseBdev1", 00:18:03.262 "aliases": [ 00:18:03.262 "97bf2c20-9d48-4dec-815c-9c361a7a8fbb" 00:18:03.262 ], 00:18:03.262 "product_name": "Malloc disk", 00:18:03.262 "block_size": 512, 00:18:03.262 "num_blocks": 65536, 
00:18:03.262 "uuid": "97bf2c20-9d48-4dec-815c-9c361a7a8fbb", 00:18:03.262 "assigned_rate_limits": { 00:18:03.262 "rw_ios_per_sec": 0, 00:18:03.262 "rw_mbytes_per_sec": 0, 00:18:03.262 "r_mbytes_per_sec": 0, 00:18:03.262 "w_mbytes_per_sec": 0 00:18:03.262 }, 00:18:03.262 "claimed": true, 00:18:03.262 "claim_type": "exclusive_write", 00:18:03.262 "zoned": false, 00:18:03.262 "supported_io_types": { 00:18:03.262 "read": true, 00:18:03.262 "write": true, 00:18:03.262 "unmap": true, 00:18:03.262 "write_zeroes": true, 00:18:03.262 "flush": true, 00:18:03.262 "reset": true, 00:18:03.262 "compare": false, 00:18:03.262 "compare_and_write": false, 00:18:03.262 "abort": true, 00:18:03.262 "nvme_admin": false, 00:18:03.262 "nvme_io": false 00:18:03.262 }, 00:18:03.262 "memory_domains": [ 00:18:03.262 { 00:18:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.262 "dma_device_type": 2 00:18:03.262 } 00:18:03.262 ], 00:18:03.262 "driver_specific": {} 00:18:03.262 } 00:18:03.262 ] 00:18:03.262 22:26:02 -- common/autotest_common.sh@895 -- # return 0 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.262 22:26:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.519 22:26:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.519 "name": "Existed_Raid", 00:18:03.519 "uuid": "f7baff8d-d47f-4d69-a7ed-6e4fe148d98d", 00:18:03.519 "strip_size_kb": 0, 00:18:03.519 "state": "configuring", 00:18:03.519 "raid_level": "raid1", 00:18:03.519 "superblock": true, 00:18:03.519 "num_base_bdevs": 3, 00:18:03.519 "num_base_bdevs_discovered": 1, 00:18:03.519 "num_base_bdevs_operational": 3, 00:18:03.519 "base_bdevs_list": [ 00:18:03.519 { 00:18:03.519 "name": "BaseBdev1", 00:18:03.519 "uuid": "97bf2c20-9d48-4dec-815c-9c361a7a8fbb", 00:18:03.519 "is_configured": true, 00:18:03.519 "data_offset": 2048, 00:18:03.519 "data_size": 63488 00:18:03.519 }, 00:18:03.519 { 00:18:03.519 "name": "BaseBdev2", 00:18:03.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.519 "is_configured": false, 00:18:03.519 "data_offset": 0, 00:18:03.519 "data_size": 0 00:18:03.519 }, 00:18:03.519 { 00:18:03.519 "name": "BaseBdev3", 00:18:03.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.519 "is_configured": false, 00:18:03.519 "data_offset": 0, 00:18:03.519 "data_size": 0 00:18:03.519 } 00:18:03.519 ] 00:18:03.519 }' 00:18:03.519 22:26:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.519 22:26:02 -- common/autotest_common.sh@10 -- # set +x 00:18:04.084 22:26:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:18:04.342 [2024-07-14 22:26:03.248669] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.342 [2024-07-14 22:26:03.248708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:04.342 22:26:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:04.342 22:26:03 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:04.599 22:26:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.856 BaseBdev1 00:18:04.856 22:26:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:04.856 22:26:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:04.856 22:26:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:04.856 22:26:03 -- common/autotest_common.sh@889 -- # local i 00:18:04.856 22:26:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:04.856 22:26:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:04.856 22:26:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.115 22:26:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.372 [ 00:18:05.373 { 00:18:05.373 "name": "BaseBdev1", 00:18:05.373 "aliases": [ 00:18:05.373 "6f11943b-6985-417a-a8d2-2b15c1b78d8b" 00:18:05.373 ], 00:18:05.373 "product_name": "Malloc disk", 00:18:05.373 "block_size": 512, 00:18:05.373 "num_blocks": 65536, 00:18:05.373 "uuid": "6f11943b-6985-417a-a8d2-2b15c1b78d8b", 00:18:05.373 "assigned_rate_limits": { 00:18:05.373 "rw_ios_per_sec": 0, 00:18:05.373 "rw_mbytes_per_sec": 0, 00:18:05.373 "r_mbytes_per_sec": 0, 00:18:05.373 "w_mbytes_per_sec": 0 00:18:05.373 }, 00:18:05.373 "claimed": false, 00:18:05.373 "zoned": false, 00:18:05.373 "supported_io_types": { 00:18:05.373 "read": true, 00:18:05.373 "write": true, 00:18:05.373 "unmap": true, 00:18:05.373 "write_zeroes": true, 00:18:05.373 "flush": true, 00:18:05.373 "reset": true, 00:18:05.373 "compare": false, 00:18:05.373 "compare_and_write": false, 00:18:05.373 "abort": true, 00:18:05.373 "nvme_admin": false, 00:18:05.373 "nvme_io": false 00:18:05.373 }, 00:18:05.373 "memory_domains": [ 00:18:05.373 { 00:18:05.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.373 "dma_device_type": 2 00:18:05.373 } 00:18:05.373 ], 00:18:05.373 "driver_specific": {} 00:18:05.373 } 00:18:05.373 ] 00:18:05.373 22:26:04 -- common/autotest_common.sh@895 -- # return 0 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:05.373 [2024-07-14 22:26:04.378921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.373 [2024-07-14 22:26:04.380644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.373 [2024-07-14 22:26:04.380695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.373 [2024-07-14 22:26:04.380706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.373 [2024-07-14 22:26:04.380729] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.373 22:26:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.630 22:26:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.630 "name": "Existed_Raid", 00:18:05.630 "uuid": "b16bd54b-960a-454f-b917-2f186ef769a5", 00:18:05.630 "strip_size_kb": 0, 00:18:05.630 "state": "configuring", 00:18:05.630 "raid_level": "raid1", 00:18:05.630 "superblock": true, 00:18:05.630 "num_base_bdevs": 3, 00:18:05.630 "num_base_bdevs_discovered": 1, 00:18:05.630 "num_base_bdevs_operational": 3, 00:18:05.630 "base_bdevs_list": [ 00:18:05.630 { 00:18:05.630 "name": "BaseBdev1", 00:18:05.630 "uuid": "6f11943b-6985-417a-a8d2-2b15c1b78d8b", 00:18:05.630 "is_configured": true, 00:18:05.630 "data_offset": 2048, 00:18:05.630 "data_size": 63488 00:18:05.630 }, 00:18:05.630 { 00:18:05.630 "name": "BaseBdev2", 00:18:05.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.630 "is_configured": false, 00:18:05.630 "data_offset": 0, 00:18:05.630 "data_size": 0 00:18:05.630 }, 00:18:05.630 { 00:18:05.630 "name": "BaseBdev3", 00:18:05.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.630 "is_configured": false, 00:18:05.630 "data_offset": 0, 00:18:05.630 "data_size": 0 00:18:05.630 } 00:18:05.630 ] 00:18:05.630 }' 00:18:05.630 22:26:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.630 22:26:04 -- common/autotest_common.sh@10 -- # set +x 00:18:06.196 22:26:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.453 [2024-07-14 22:26:05.433008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.453 BaseBdev2 00:18:06.453 22:26:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:06.453 22:26:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:06.453 22:26:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.453 22:26:05 -- common/autotest_common.sh@889 -- # local i 00:18:06.453 22:26:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.453 22:26:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.453 22:26:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.711 22:26:05 -- common/autotest_common.sh@894 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.971 [ 00:18:06.971 { 00:18:06.971 "name": "BaseBdev2", 00:18:06.971 "aliases": [ 00:18:06.971 "ee00ce01-2a90-45bd-8c33-58d9f5b51c5b" 00:18:06.971 ], 00:18:06.971 "product_name": "Malloc disk", 00:18:06.971 "block_size": 512, 00:18:06.971 "num_blocks": 65536, 00:18:06.971 "uuid": "ee00ce01-2a90-45bd-8c33-58d9f5b51c5b", 00:18:06.971 "assigned_rate_limits": { 00:18:06.971 "rw_ios_per_sec": 0, 00:18:06.971 "rw_mbytes_per_sec": 0, 00:18:06.971 "r_mbytes_per_sec": 0, 00:18:06.971 "w_mbytes_per_sec": 0 00:18:06.971 }, 00:18:06.971 "claimed": true, 00:18:06.971 "claim_type": "exclusive_write", 00:18:06.971 "zoned": false, 00:18:06.971 "supported_io_types": { 00:18:06.971 "read": true, 00:18:06.971 "write": true, 00:18:06.971 "unmap": true, 00:18:06.971 "write_zeroes": true, 00:18:06.971 "flush": true, 00:18:06.971 "reset": true, 00:18:06.971 "compare": false, 00:18:06.971 "compare_and_write": false, 00:18:06.971 "abort": true, 00:18:06.971 "nvme_admin": false, 00:18:06.971 "nvme_io": false 00:18:06.971 }, 00:18:06.971 "memory_domains": [ 00:18:06.971 { 00:18:06.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.971 "dma_device_type": 2 00:18:06.971 } 00:18:06.971 ], 00:18:06.971 "driver_specific": {} 00:18:06.971 } 00:18:06.971 ] 00:18:06.971 22:26:05 -- common/autotest_common.sh@895 -- # return 0 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.971 22:26:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.971 22:26:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.971 "name": "Existed_Raid", 00:18:06.971 "uuid": "b16bd54b-960a-454f-b917-2f186ef769a5", 00:18:06.971 "strip_size_kb": 0, 00:18:06.971 "state": "configuring", 00:18:06.971 "raid_level": "raid1", 00:18:06.971 "superblock": true, 00:18:06.971 "num_base_bdevs": 3, 00:18:06.971 "num_base_bdevs_discovered": 2, 00:18:06.971 "num_base_bdevs_operational": 3, 00:18:06.971 "base_bdevs_list": [ 00:18:06.971 { 00:18:06.971 "name": "BaseBdev1", 00:18:06.971 "uuid": "6f11943b-6985-417a-a8d2-2b15c1b78d8b", 00:18:06.971 "is_configured": true, 00:18:06.971 "data_offset": 2048, 00:18:06.971 "data_size": 63488 00:18:06.971 }, 00:18:06.971 { 00:18:06.971 "name": "BaseBdev2", 00:18:06.971 "uuid": "ee00ce01-2a90-45bd-8c33-58d9f5b51c5b", 00:18:06.971 "is_configured": true, 00:18:06.971 "data_offset": 2048, 00:18:06.971 "data_size": 63488 00:18:06.971 }, 
00:18:06.971 { 00:18:06.971 "name": "BaseBdev3", 00:18:06.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.971 "is_configured": false, 00:18:06.971 "data_offset": 0, 00:18:06.971 "data_size": 0 00:18:06.971 } 00:18:06.971 ] 00:18:06.971 }' 00:18:06.971 22:26:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.971 22:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:07.906 22:26:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:07.906 [2024-07-14 22:26:06.928650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.906 [2024-07-14 22:26:06.928900] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:07.906 [2024-07-14 22:26:06.928931] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:07.906 [2024-07-14 22:26:06.929068] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:07.906 BaseBdev3 00:18:07.906 [2024-07-14 22:26:06.929419] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:07.906 [2024-07-14 22:26:06.929432] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:07.906 [2024-07-14 22:26:06.929592] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.906 22:26:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:07.906 22:26:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:07.906 22:26:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:07.906 22:26:06 -- common/autotest_common.sh@889 -- # local i 00:18:07.906 22:26:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:07.906 22:26:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:07.906 22:26:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.164 22:26:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.422 [ 00:18:08.422 { 00:18:08.422 "name": "BaseBdev3", 00:18:08.422 "aliases": [ 00:18:08.422 "592b2d1e-4888-4313-b7c5-4e114f9bcaa9" 00:18:08.422 ], 00:18:08.422 "product_name": "Malloc disk", 00:18:08.422 "block_size": 512, 00:18:08.422 "num_blocks": 65536, 00:18:08.422 "uuid": "592b2d1e-4888-4313-b7c5-4e114f9bcaa9", 00:18:08.422 "assigned_rate_limits": { 00:18:08.422 "rw_ios_per_sec": 0, 00:18:08.422 "rw_mbytes_per_sec": 0, 00:18:08.422 "r_mbytes_per_sec": 0, 00:18:08.422 "w_mbytes_per_sec": 0 00:18:08.422 }, 00:18:08.422 "claimed": true, 00:18:08.422 "claim_type": "exclusive_write", 00:18:08.422 "zoned": false, 00:18:08.422 "supported_io_types": { 00:18:08.422 "read": true, 00:18:08.422 "write": true, 00:18:08.422 "unmap": true, 00:18:08.422 "write_zeroes": true, 00:18:08.422 "flush": true, 00:18:08.422 "reset": true, 00:18:08.422 "compare": false, 00:18:08.422 "compare_and_write": false, 00:18:08.422 "abort": true, 00:18:08.422 "nvme_admin": false, 00:18:08.422 "nvme_io": false 00:18:08.422 }, 00:18:08.422 "memory_domains": [ 00:18:08.422 { 00:18:08.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.422 "dma_device_type": 2 00:18:08.422 } 00:18:08.422 ], 00:18:08.423 "driver_specific": {} 00:18:08.423 } 00:18:08.423 ] 00:18:08.423 22:26:07 -- 
common/autotest_common.sh@895 -- # return 0 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.423 22:26:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.681 22:26:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.681 "name": "Existed_Raid", 00:18:08.681 "uuid": "b16bd54b-960a-454f-b917-2f186ef769a5", 00:18:08.681 "strip_size_kb": 0, 00:18:08.681 "state": "online", 00:18:08.681 "raid_level": "raid1", 00:18:08.681 "superblock": true, 00:18:08.681 "num_base_bdevs": 3, 00:18:08.681 "num_base_bdevs_discovered": 3, 00:18:08.681 "num_base_bdevs_operational": 3, 00:18:08.681 "base_bdevs_list": [ 00:18:08.681 { 00:18:08.681 "name": "BaseBdev1", 00:18:08.681 "uuid": "6f11943b-6985-417a-a8d2-2b15c1b78d8b", 00:18:08.681 "is_configured": true, 00:18:08.681 "data_offset": 2048, 00:18:08.681 "data_size": 63488 00:18:08.681 }, 00:18:08.681 { 00:18:08.681 "name": "BaseBdev2", 00:18:08.681 "uuid": "ee00ce01-2a90-45bd-8c33-58d9f5b51c5b", 00:18:08.681 "is_configured": true, 00:18:08.681 "data_offset": 2048, 00:18:08.681 "data_size": 63488 00:18:08.681 }, 00:18:08.681 { 00:18:08.681 "name": "BaseBdev3", 00:18:08.681 "uuid": "592b2d1e-4888-4313-b7c5-4e114f9bcaa9", 00:18:08.681 "is_configured": true, 00:18:08.681 "data_offset": 2048, 00:18:08.681 "data_size": 63488 00:18:08.681 } 00:18:08.681 ] 00:18:08.681 }' 00:18:08.681 22:26:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.681 22:26:07 -- common/autotest_common.sh@10 -- # set +x 00:18:09.248 22:26:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:09.506 [2024-07-14 22:26:08.413002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
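Annotation, not part of the log: the xtrace above repeats one pattern after every base-bdev change, namely verify_raid_bdev_state fetching the raid bdev via bdev_raid_get_bdevs and comparing fields with jq. Below is a condensed, hedged sketch of that check; the helper name verify_state and its arguments are illustrative, not taken from the harness, while the RPC command, socket path, and JSON field names are exactly those visible in the run above.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    # Fetch the named raid bdev and assert its state and discovered-member count.
    verify_state() {
        local name=$1 state=$2 discovered=$3
        local info
        info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
               jq -r --arg n "$name" '.[] | select(.name == $n)')
        [[ $(jq -r '.state' <<<"$info") == "$state" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq $discovered ]]
    }

    # Example: after bdev_malloc_delete BaseBdev1, raid1 should stay online
    # in degraded mode with 2 of 3 members, as the JSON below confirms.
    verify_state Existed_Raid online 2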
00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.506 22:26:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.765 22:26:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.765 "name": "Existed_Raid", 00:18:09.765 "uuid": "b16bd54b-960a-454f-b917-2f186ef769a5", 00:18:09.765 "strip_size_kb": 0, 00:18:09.765 "state": "online", 00:18:09.765 "raid_level": "raid1", 00:18:09.765 "superblock": true, 00:18:09.765 "num_base_bdevs": 3, 00:18:09.765 "num_base_bdevs_discovered": 2, 00:18:09.765 "num_base_bdevs_operational": 2, 00:18:09.765 "base_bdevs_list": [ 00:18:09.765 { 00:18:09.765 "name": null, 00:18:09.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.765 "is_configured": false, 00:18:09.765 "data_offset": 2048, 00:18:09.765 "data_size": 63488 00:18:09.765 }, 00:18:09.765 { 00:18:09.765 "name": "BaseBdev2", 00:18:09.765 "uuid": "ee00ce01-2a90-45bd-8c33-58d9f5b51c5b", 00:18:09.765 "is_configured": true, 00:18:09.765 "data_offset": 2048, 00:18:09.765 "data_size": 63488 00:18:09.765 }, 00:18:09.765 { 00:18:09.765 "name": "BaseBdev3", 00:18:09.765 "uuid": "592b2d1e-4888-4313-b7c5-4e114f9bcaa9", 00:18:09.765 "is_configured": true, 00:18:09.765 "data_offset": 2048, 00:18:09.765 "data_size": 63488 00:18:09.765 } 00:18:09.765 ] 00:18:09.765 }' 00:18:09.765 22:26:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.765 22:26:08 -- common/autotest_common.sh@10 -- # set +x 00:18:10.361 22:26:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:10.361 22:26:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:10.361 22:26:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.361 22:26:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:10.620 22:26:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:10.620 22:26:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.620 22:26:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:10.879 [2024-07-14 22:26:09.723879] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.879 22:26:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:10.879 22:26:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:10.879 22:26:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.879 22:26:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:11.138 22:26:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:11.138 22:26:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.138 22:26:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:11.396 [2024-07-14 22:26:10.218941] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:11.396 [2024-07-14 22:26:10.218973] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.396 [2024-07-14 22:26:10.219029] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.396 [2024-07-14 22:26:10.282196] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.396 [2024-07-14 22:26:10.282230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:11.396 22:26:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:11.396 22:26:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:11.396 22:26:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.396 22:26:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:11.656 22:26:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:11.656 22:26:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:11.656 22:26:10 -- bdev/bdev_raid.sh@287 -- # killprocess 120346 00:18:11.656 22:26:10 -- common/autotest_common.sh@926 -- # '[' -z 120346 ']' 00:18:11.656 22:26:10 -- common/autotest_common.sh@930 -- # kill -0 120346 00:18:11.656 22:26:10 -- common/autotest_common.sh@931 -- # uname 00:18:11.656 22:26:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.656 22:26:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120346 00:18:11.656 killing process with pid 120346 00:18:11.656 22:26:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:11.656 22:26:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:11.656 22:26:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120346' 00:18:11.656 22:26:10 -- common/autotest_common.sh@945 -- # kill 120346 00:18:11.656 22:26:10 -- common/autotest_common.sh@950 -- # wait 120346 00:18:11.656 [2024-07-14 22:26:10.516857] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.656 [2024-07-14 22:26:10.516954] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.592 ************************************ 00:18:12.592 END TEST raid_state_function_test_sb 00:18:12.592 ************************************ 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:12.592 00:18:12.592 real 0m12.474s 00:18:12.592 user 0m22.101s 00:18:12.592 sys 0m1.587s 00:18:12.592 22:26:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.592 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:12.592 22:26:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:12.592 22:26:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:12.592 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:18:12.592 ************************************ 00:18:12.592 START TEST raid_superblock_test 00:18:12.592 ************************************ 00:18:12.592 22:26:11 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:12.592 22:26:11 -- 
bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:12.592 22:26:11 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@357 -- # raid_pid=120748 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120748 /var/tmp/spdk-raid.sock 00:18:12.593 22:26:11 -- common/autotest_common.sh@819 -- # '[' -z 120748 ']' 00:18:12.593 22:26:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:12.593 22:26:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:12.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:12.593 22:26:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:12.593 22:26:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:12.593 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 22:26:11 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:12.593 [2024-07-14 22:26:11.556278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
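Annotation, not part of the log: raid_superblock_test has just launched a dedicated bdev_svc app on its own RPC socket with bdev_raid debug logging and is waiting (waitforlisten) for it to come up. A minimal sketch of that setup follows; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, which also tracks the pid, and rpc_get_methods is used here only as a readiness probe.

    BDEV_SVC=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    # Start the bdev service on a private socket with raid debug logs enabled.
    "$BDEV_SVC" -r "$SOCK" -L bdev_raid &
    raid_pid=$!

    # Poll until the app answers JSON-RPC requests on the socket.
    until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done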
00:18:12.593 [2024-07-14 22:26:11.556716] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120748 ] 00:18:12.851 [2024-07-14 22:26:11.728715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.110 [2024-07-14 22:26:11.944618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.110 [2024-07-14 22:26:12.107399] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.678 22:26:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:13.678 22:26:12 -- common/autotest_common.sh@852 -- # return 0 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.678 22:26:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:13.937 malloc1 00:18:13.937 22:26:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.196 [2024-07-14 22:26:13.020462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.196 [2024-07-14 22:26:13.020623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.196 [2024-07-14 22:26:13.020670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:14.196 [2024-07-14 22:26:13.020737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.196 [2024-07-14 22:26:13.023401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.196 [2024-07-14 22:26:13.023469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.196 pt1 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:14.196 22:26:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:14.453 malloc2 00:18:14.453 22:26:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:14.711 [2024-07-14 22:26:13.526782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.711 [2024-07-14 22:26:13.526884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.711 [2024-07-14 22:26:13.526930] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:14.711 [2024-07-14 22:26:13.526989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.711 [2024-07-14 22:26:13.529454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.711 [2024-07-14 22:26:13.529519] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.711 pt2 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:14.711 malloc3 00:18:14.711 22:26:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:14.969 [2024-07-14 22:26:13.941540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.969 [2024-07-14 22:26:13.941651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.969 [2024-07-14 22:26:13.941694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:14.969 [2024-07-14 22:26:13.941739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.969 [2024-07-14 22:26:13.944092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.969 [2024-07-14 22:26:13.944156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.969 pt3 00:18:14.969 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:14.969 22:26:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:14.969 22:26:13 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:15.227 [2024-07-14 22:26:14.125593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:15.227 [2024-07-14 22:26:14.127982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.227 [2024-07-14 22:26:14.128072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:15.227 [2024-07-14 22:26:14.128348] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:15.227 [2024-07-14 22:26:14.128371] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:15.227 [2024-07-14 22:26:14.128527] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:15.227 [2024-07-14 22:26:14.128951] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:15.227 [2024-07-14 22:26:14.128975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:15.227 [2024-07-14 22:26:14.129178] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.227 22:26:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.484 22:26:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.484 "name": "raid_bdev1", 00:18:15.484 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:15.484 "strip_size_kb": 0, 00:18:15.484 "state": "online", 00:18:15.484 "raid_level": "raid1", 00:18:15.484 "superblock": true, 00:18:15.484 "num_base_bdevs": 3, 00:18:15.484 "num_base_bdevs_discovered": 3, 00:18:15.484 "num_base_bdevs_operational": 3, 00:18:15.484 "base_bdevs_list": [ 00:18:15.484 { 00:18:15.484 "name": "pt1", 00:18:15.484 "uuid": "faf111fd-8fd8-5f5c-b0d4-9463d40856c0", 00:18:15.484 "is_configured": true, 00:18:15.484 "data_offset": 2048, 00:18:15.484 "data_size": 63488 00:18:15.484 }, 00:18:15.484 { 00:18:15.484 "name": "pt2", 00:18:15.484 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:15.484 "is_configured": true, 00:18:15.484 "data_offset": 2048, 00:18:15.484 "data_size": 63488 00:18:15.484 }, 00:18:15.484 { 00:18:15.484 "name": "pt3", 00:18:15.484 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:15.484 "is_configured": true, 00:18:15.485 "data_offset": 2048, 00:18:15.485 "data_size": 63488 00:18:15.485 } 00:18:15.485 ] 00:18:15.485 }' 00:18:15.485 22:26:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.485 22:26:14 -- common/autotest_common.sh@10 -- # set +x 00:18:16.049 22:26:14 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:16.049 22:26:14 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:16.389 [2024-07-14 22:26:15.202109] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.390 22:26:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f3392c6b-0587-4f18-9328-bd78dce2d752 00:18:16.390 22:26:15 -- bdev/bdev_raid.sh@380 -- # '[' -z f3392c6b-0587-4f18-9328-bd78dce2d752 ']' 00:18:16.390 22:26:15 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:16.647 [2024-07-14 22:26:15.465892] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.647 [2024-07-14 22:26:15.465956] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.647 [2024-07-14 22:26:15.466070] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.647 [2024-07-14 22:26:15.466209] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.647 [2024-07-14 22:26:15.466224] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:16.647 22:26:15 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.647 22:26:15 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:16.647 22:26:15 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:16.648 22:26:15 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:16.648 22:26:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:16.648 22:26:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:16.906 22:26:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:16.906 22:26:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:17.164 22:26:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.164 22:26:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:17.422 22:26:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:17.422 22:26:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:17.681 22:26:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:17.681 22:26:16 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:17.681 22:26:16 -- common/autotest_common.sh@640 -- # local es=0 00:18:17.681 22:26:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:17.681 22:26:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.681 22:26:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.681 22:26:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.681 22:26:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.681 22:26:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.681 22:26:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.681 22:26:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.681 22:26:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:17.681 22:26:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:17.681 [2024-07-14 22:26:16.736248] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:17.681 [2024-07-14 22:26:16.739151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:17.681 [2024-07-14 22:26:16.739281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:17.681 [2024-07-14 22:26:16.739368] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:17.681 [2024-07-14 22:26:16.739496] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:17.681 [2024-07-14 22:26:16.739548] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:17.681 [2024-07-14 22:26:16.739648] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.681 [2024-07-14 22:26:16.739675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:18:17.681 request: 00:18:17.681 { 00:18:17.681 "name": "raid_bdev1", 00:18:17.681 "raid_level": "raid1", 00:18:17.681 "base_bdevs": [ 00:18:17.681 "malloc1", 00:18:17.681 "malloc2", 00:18:17.681 "malloc3" 00:18:17.681 ], 00:18:17.681 "superblock": false, 00:18:17.681 "method": "bdev_raid_create", 00:18:17.681 "req_id": 1 00:18:17.681 } 00:18:17.681 Got JSON-RPC error response 00:18:17.681 response: 00:18:17.681 { 00:18:17.681 "code": -17, 00:18:17.681 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:17.681 } 00:18:17.940 22:26:16 -- common/autotest_common.sh@643 -- # es=1 00:18:17.940 22:26:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:17.940 22:26:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:17.940 22:26:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:17.940 22:26:16 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.940 22:26:16 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:17.940 22:26:16 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:17.940 22:26:16 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:17.940 22:26:16 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.198 [2024-07-14 22:26:17.096342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.198 [2024-07-14 22:26:17.096567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.198 [2024-07-14 22:26:17.096642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:18.198 [2024-07-14 22:26:17.096893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.198 [2024-07-14 22:26:17.099606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.198 [2024-07-14 22:26:17.099809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.198 [2024-07-14 22:26:17.100040] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:18.198 [2024-07-14 22:26:17.100201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.198 pt1 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:18.198 
22:26:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.198 22:26:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.456 22:26:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.456 "name": "raid_bdev1", 00:18:18.456 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:18.456 "strip_size_kb": 0, 00:18:18.456 "state": "configuring", 00:18:18.456 "raid_level": "raid1", 00:18:18.456 "superblock": true, 00:18:18.456 "num_base_bdevs": 3, 00:18:18.456 "num_base_bdevs_discovered": 1, 00:18:18.456 "num_base_bdevs_operational": 3, 00:18:18.456 "base_bdevs_list": [ 00:18:18.456 { 00:18:18.456 "name": "pt1", 00:18:18.456 "uuid": "faf111fd-8fd8-5f5c-b0d4-9463d40856c0", 00:18:18.456 "is_configured": true, 00:18:18.456 "data_offset": 2048, 00:18:18.456 "data_size": 63488 00:18:18.456 }, 00:18:18.456 { 00:18:18.456 "name": null, 00:18:18.456 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:18.456 "is_configured": false, 00:18:18.456 "data_offset": 2048, 00:18:18.456 "data_size": 63488 00:18:18.456 }, 00:18:18.456 { 00:18:18.456 "name": null, 00:18:18.456 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:18.456 "is_configured": false, 00:18:18.456 "data_offset": 2048, 00:18:18.456 "data_size": 63488 00:18:18.456 } 00:18:18.456 ] 00:18:18.456 }' 00:18:18.456 22:26:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.456 22:26:17 -- common/autotest_common.sh@10 -- # set +x 00:18:19.024 22:26:17 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:19.024 22:26:17 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.282 [2024-07-14 22:26:18.132759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.282 [2024-07-14 22:26:18.132867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.282 [2024-07-14 22:26:18.132921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:19.282 [2024-07-14 22:26:18.132948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.282 [2024-07-14 22:26:18.133519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.282 [2024-07-14 22:26:18.133578] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.282 [2024-07-14 22:26:18.133713] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:19.282 [2024-07-14 22:26:18.133744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.282 pt2 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:19.282 [2024-07-14 22:26:18.328804] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.282 22:26:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.283 22:26:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.283 22:26:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.283 22:26:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.283 22:26:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.541 22:26:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.541 "name": "raid_bdev1", 00:18:19.541 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:19.541 "strip_size_kb": 0, 00:18:19.541 "state": "configuring", 00:18:19.541 "raid_level": "raid1", 00:18:19.541 "superblock": true, 00:18:19.541 "num_base_bdevs": 3, 00:18:19.541 "num_base_bdevs_discovered": 1, 00:18:19.541 "num_base_bdevs_operational": 3, 00:18:19.541 "base_bdevs_list": [ 00:18:19.541 { 00:18:19.541 "name": "pt1", 00:18:19.541 "uuid": "faf111fd-8fd8-5f5c-b0d4-9463d40856c0", 00:18:19.541 "is_configured": true, 00:18:19.541 "data_offset": 2048, 00:18:19.541 "data_size": 63488 00:18:19.541 }, 00:18:19.541 { 00:18:19.541 "name": null, 00:18:19.541 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:19.541 "is_configured": false, 00:18:19.541 "data_offset": 2048, 00:18:19.541 "data_size": 63488 00:18:19.541 }, 00:18:19.541 { 00:18:19.541 "name": null, 00:18:19.541 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:19.541 "is_configured": false, 00:18:19.541 "data_offset": 2048, 00:18:19.541 "data_size": 63488 00:18:19.541 } 00:18:19.541 ] 00:18:19.541 }' 00:18:19.541 22:26:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.541 22:26:18 -- common/autotest_common.sh@10 -- # set +x 00:18:20.107 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:20.107 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:20.107 22:26:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.365 [2024-07-14 22:26:19.232961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.365 [2024-07-14 22:26:19.233087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.365 [2024-07-14 22:26:19.233144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:20.365 [2024-07-14 22:26:19.233173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.365 [2024-07-14 22:26:19.233771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.365 [2024-07-14 22:26:19.233833] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.365 [2024-07-14 22:26:19.233991] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:20.365 [2024-07-14 22:26:19.234020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.365 pt2 00:18:20.365 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:20.365 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:20.365 22:26:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:20.365 [2024-07-14 22:26:19.417139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:20.365 [2024-07-14 22:26:19.417204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.365 [2024-07-14 22:26:19.417240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:20.365 [2024-07-14 22:26:19.417269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.365 [2024-07-14 22:26:19.417659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.365 [2024-07-14 22:26:19.417701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:20.365 [2024-07-14 22:26:19.417808] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:20.365 [2024-07-14 22:26:19.417835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.365 [2024-07-14 22:26:19.417980] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:20.365 [2024-07-14 22:26:19.417996] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.365 [2024-07-14 22:26:19.418096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:20.365 [2024-07-14 22:26:19.418470] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:20.365 [2024-07-14 22:26:19.418486] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:20.365 [2024-07-14 22:26:19.418673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.365 pt3 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.624 "name": "raid_bdev1", 00:18:20.624 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:20.624 "strip_size_kb": 0, 00:18:20.624 "state": "online", 00:18:20.624 "raid_level": "raid1", 00:18:20.624 "superblock": true, 00:18:20.624 "num_base_bdevs": 3, 00:18:20.624 "num_base_bdevs_discovered": 3, 00:18:20.624 "num_base_bdevs_operational": 3, 00:18:20.624 "base_bdevs_list": [ 00:18:20.624 { 00:18:20.624 "name": "pt1", 00:18:20.624 "uuid": "faf111fd-8fd8-5f5c-b0d4-9463d40856c0", 00:18:20.624 "is_configured": true, 00:18:20.624 "data_offset": 2048, 00:18:20.624 "data_size": 63488 00:18:20.624 }, 00:18:20.624 { 00:18:20.624 "name": "pt2", 00:18:20.624 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:20.624 "is_configured": true, 00:18:20.624 "data_offset": 2048, 00:18:20.624 "data_size": 63488 00:18:20.624 }, 00:18:20.624 { 00:18:20.624 "name": "pt3", 00:18:20.624 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:20.624 "is_configured": true, 00:18:20.624 "data_offset": 2048, 00:18:20.624 "data_size": 63488 00:18:20.624 } 00:18:20.624 ] 00:18:20.624 }' 00:18:20.624 22:26:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.624 22:26:19 -- common/autotest_common.sh@10 -- # set +x 00:18:21.191 22:26:20 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:21.191 22:26:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:21.449 [2024-07-14 22:26:20.325613] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.449 22:26:20 -- bdev/bdev_raid.sh@430 -- # '[' f3392c6b-0587-4f18-9328-bd78dce2d752 '!=' f3392c6b-0587-4f18-9328-bd78dce2d752 ']' 00:18:21.449 22:26:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:21.449 22:26:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:21.449 22:26:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:21.449 22:26:20 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:21.449 [2024-07-14 22:26:20.505462] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.707 "name": "raid_bdev1", 00:18:21.707 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:21.707 "strip_size_kb": 0, 
00:18:21.707 "state": "online", 00:18:21.707 "raid_level": "raid1", 00:18:21.707 "superblock": true, 00:18:21.707 "num_base_bdevs": 3, 00:18:21.707 "num_base_bdevs_discovered": 2, 00:18:21.707 "num_base_bdevs_operational": 2, 00:18:21.707 "base_bdevs_list": [ 00:18:21.707 { 00:18:21.707 "name": null, 00:18:21.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.707 "is_configured": false, 00:18:21.707 "data_offset": 2048, 00:18:21.707 "data_size": 63488 00:18:21.707 }, 00:18:21.707 { 00:18:21.707 "name": "pt2", 00:18:21.707 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:21.707 "is_configured": true, 00:18:21.707 "data_offset": 2048, 00:18:21.707 "data_size": 63488 00:18:21.707 }, 00:18:21.707 { 00:18:21.707 "name": "pt3", 00:18:21.707 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:21.707 "is_configured": true, 00:18:21.707 "data_offset": 2048, 00:18:21.707 "data_size": 63488 00:18:21.707 } 00:18:21.707 ] 00:18:21.707 }' 00:18:21.707 22:26:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.707 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.272 22:26:21 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:22.530 [2024-07-14 22:26:21.553655] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.530 [2024-07-14 22:26:21.553696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.530 [2024-07-14 22:26:21.553788] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.530 [2024-07-14 22:26:21.553893] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.530 [2024-07-14 22:26:21.553912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:22.530 22:26:21 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.530 22:26:21 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:22.788 22:26:21 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:22.788 22:26:21 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:22.788 22:26:21 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:22.788 22:26:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:22.788 22:26:21 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:23.046 22:26:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:23.046 22:26:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:23.046 22:26:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:23.305 22:26:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:23.305 22:26:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:23.305 22:26:22 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:23.305 22:26:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:23.305 22:26:22 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.564 [2024-07-14 22:26:22.413933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.564 [2024-07-14 22:26:22.414024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:23.564 [2024-07-14 22:26:22.414063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:23.564 [2024-07-14 22:26:22.414087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.564 [2024-07-14 22:26:22.416571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.564 [2024-07-14 22:26:22.416633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.564 [2024-07-14 22:26:22.416753] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:23.564 [2024-07-14 22:26:22.416811] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.564 pt2 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.564 "name": "raid_bdev1", 00:18:23.564 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:23.564 "strip_size_kb": 0, 00:18:23.564 "state": "configuring", 00:18:23.564 "raid_level": "raid1", 00:18:23.564 "superblock": true, 00:18:23.564 "num_base_bdevs": 3, 00:18:23.564 "num_base_bdevs_discovered": 1, 00:18:23.564 "num_base_bdevs_operational": 2, 00:18:23.564 "base_bdevs_list": [ 00:18:23.564 { 00:18:23.564 "name": null, 00:18:23.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.564 "is_configured": false, 00:18:23.564 "data_offset": 2048, 00:18:23.564 "data_size": 63488 00:18:23.564 }, 00:18:23.564 { 00:18:23.564 "name": "pt2", 00:18:23.564 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:23.564 "is_configured": true, 00:18:23.564 "data_offset": 2048, 00:18:23.564 "data_size": 63488 00:18:23.564 }, 00:18:23.564 { 00:18:23.564 "name": null, 00:18:23.564 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:23.564 "is_configured": false, 00:18:23.564 "data_offset": 2048, 00:18:23.564 "data_size": 63488 00:18:23.564 } 00:18:23.564 ] 00:18:23.564 }' 00:18:23.564 22:26:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.564 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:24.498 [2024-07-14 22:26:23.470290] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:24.498 [2024-07-14 22:26:23.470455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.498 [2024-07-14 22:26:23.470505] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:24.498 [2024-07-14 22:26:23.470532] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.498 [2024-07-14 22:26:23.471379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.498 [2024-07-14 22:26:23.471457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:24.498 [2024-07-14 22:26:23.471614] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:24.498 [2024-07-14 22:26:23.471644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:24.498 [2024-07-14 22:26:23.472008] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:18:24.498 [2024-07-14 22:26:23.472033] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:24.498 [2024-07-14 22:26:23.472308] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:24.498 [2024-07-14 22:26:23.472811] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:18:24.498 [2024-07-14 22:26:23.472835] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:18:24.498 [2024-07-14 22:26:23.472983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.498 pt3 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.498 22:26:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.756 22:26:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.756 "name": "raid_bdev1", 00:18:24.756 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:24.756 "strip_size_kb": 0, 00:18:24.756 "state": "online", 00:18:24.756 "raid_level": "raid1", 00:18:24.756 "superblock": true, 00:18:24.756 "num_base_bdevs": 3, 00:18:24.756 "num_base_bdevs_discovered": 2, 00:18:24.756 "num_base_bdevs_operational": 2, 00:18:24.756 "base_bdevs_list": [ 00:18:24.756 { 00:18:24.756 "name": null, 00:18:24.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.756 "is_configured": false, 00:18:24.756 "data_offset": 2048, 00:18:24.756 "data_size": 63488 00:18:24.756 }, 00:18:24.756 { 00:18:24.756 "name": "pt2", 00:18:24.756 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 
00:18:24.756 "is_configured": true, 00:18:24.756 "data_offset": 2048, 00:18:24.756 "data_size": 63488 00:18:24.756 }, 00:18:24.756 { 00:18:24.756 "name": "pt3", 00:18:24.756 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:24.756 "is_configured": true, 00:18:24.756 "data_offset": 2048, 00:18:24.756 "data_size": 63488 00:18:24.756 } 00:18:24.756 ] 00:18:24.756 }' 00:18:24.757 22:26:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.757 22:26:23 -- common/autotest_common.sh@10 -- # set +x 00:18:25.322 22:26:24 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:25.322 22:26:24 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:25.581 [2024-07-14 22:26:24.514720] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.581 [2024-07-14 22:26:24.514801] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.581 [2024-07-14 22:26:24.514916] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.581 [2024-07-14 22:26:24.514996] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.581 [2024-07-14 22:26:24.515009] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:18:25.581 22:26:24 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:25.581 22:26:24 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.839 22:26:24 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:25.839 22:26:24 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:25.839 22:26:24 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.097 [2024-07-14 22:26:24.926851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.097 [2024-07-14 22:26:24.927013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.097 [2024-07-14 22:26:24.927068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:26.097 [2024-07-14 22:26:24.927093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.097 [2024-07-14 22:26:24.929812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.097 [2024-07-14 22:26:24.929880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.097 [2024-07-14 22:26:24.930022] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:26.097 [2024-07-14 22:26:24.930380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.097 pt1 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.097 22:26:24 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.097 22:26:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.355 22:26:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.355 "name": "raid_bdev1", 00:18:26.355 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:26.355 "strip_size_kb": 0, 00:18:26.355 "state": "configuring", 00:18:26.355 "raid_level": "raid1", 00:18:26.355 "superblock": true, 00:18:26.355 "num_base_bdevs": 3, 00:18:26.355 "num_base_bdevs_discovered": 1, 00:18:26.355 "num_base_bdevs_operational": 3, 00:18:26.355 "base_bdevs_list": [ 00:18:26.355 { 00:18:26.355 "name": "pt1", 00:18:26.355 "uuid": "faf111fd-8fd8-5f5c-b0d4-9463d40856c0", 00:18:26.355 "is_configured": true, 00:18:26.355 "data_offset": 2048, 00:18:26.355 "data_size": 63488 00:18:26.355 }, 00:18:26.355 { 00:18:26.355 "name": null, 00:18:26.355 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:26.355 "is_configured": false, 00:18:26.355 "data_offset": 2048, 00:18:26.355 "data_size": 63488 00:18:26.355 }, 00:18:26.355 { 00:18:26.355 "name": null, 00:18:26.355 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:26.355 "is_configured": false, 00:18:26.355 "data_offset": 2048, 00:18:26.355 "data_size": 63488 00:18:26.355 } 00:18:26.355 ] 00:18:26.355 }' 00:18:26.355 22:26:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.355 22:26:25 -- common/autotest_common.sh@10 -- # set +x 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:26.921 22:26:25 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:27.179 22:26:26 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:27.179 22:26:26 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:27.179 22:26:26 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:27.179 22:26:26 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:27.437 [2024-07-14 22:26:26.355851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:27.437 [2024-07-14 22:26:26.355967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.438 [2024-07-14 22:26:26.356015] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:27.438 [2024-07-14 22:26:26.356055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.438 [2024-07-14 22:26:26.356700] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.438 [2024-07-14 22:26:26.356756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:27.438 [2024-07-14 22:26:26.356905] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
pt3 00:18:27.438 [2024-07-14 22:26:26.356926] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:27.438 [2024-07-14 22:26:26.356936] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.438 [2024-07-14 22:26:26.356974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:18:27.438 [2024-07-14 22:26:26.357085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:27.438 pt3 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.438 22:26:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.696 22:26:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.696 "name": "raid_bdev1", 00:18:27.696 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:27.696 "strip_size_kb": 0, 00:18:27.696 "state": "configuring", 00:18:27.696 "raid_level": "raid1", 00:18:27.696 "superblock": true, 00:18:27.696 "num_base_bdevs": 3, 00:18:27.696 "num_base_bdevs_discovered": 1, 00:18:27.696 "num_base_bdevs_operational": 2, 00:18:27.696 "base_bdevs_list": [ 00:18:27.696 { 00:18:27.696 "name": null, 00:18:27.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.696 "is_configured": false, 00:18:27.696 "data_offset": 2048, 00:18:27.696 "data_size": 63488 00:18:27.696 }, 00:18:27.696 { 00:18:27.696 "name": null, 00:18:27.696 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:27.696 "is_configured": false, 00:18:27.696 "data_offset": 2048, 00:18:27.696 "data_size": 63488 00:18:27.696 }, 00:18:27.696 { 00:18:27.696 "name": "pt3", 00:18:27.696 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:27.696 "is_configured": true, 00:18:27.696 "data_offset": 2048, 00:18:27.696 "data_size": 63488 00:18:27.696 } 00:18:27.696 ] 00:18:27.696 }' 00:18:27.696 22:26:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.696 22:26:26 -- common/autotest_common.sh@10 -- # set +x 00:18:28.261 22:26:27 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:28.261 22:26:27 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:28.261 22:26:27 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.520 [2024-07-14 22:26:27.376235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.520 [2024-07-14 22:26:27.376897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.520 [2024-07-14 22:26:27.377085] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:28.520 [2024-07-14 22:26:27.377244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.520 [2024-07-14 22:26:27.378007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.520 [2024-07-14 22:26:27.378178] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.520 [2024-07-14 22:26:27.378447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:28.520 [2024-07-14 22:26:27.378513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.520 [2024-07-14 22:26:27.378697] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:18:28.520 [2024-07-14 22:26:27.378712] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:28.520 [2024-07-14 22:26:27.378880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:28.520 [2024-07-14 22:26:27.379337] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:18:28.520 [2024-07-14 22:26:27.379367] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:18:28.520 [2024-07-14 22:26:27.379572] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.520 pt2 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.520 22:26:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.778 22:26:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.778 "name": "raid_bdev1", 00:18:28.778 "uuid": "f3392c6b-0587-4f18-9328-bd78dce2d752", 00:18:28.778 "strip_size_kb": 0, 00:18:28.778 "state": "online", 00:18:28.778 "raid_level": "raid1", 00:18:28.778 "superblock": true, 00:18:28.778 "num_base_bdevs": 3, 00:18:28.778 "num_base_bdevs_discovered": 2, 00:18:28.778 "num_base_bdevs_operational": 2, 00:18:28.778 "base_bdevs_list": [ 00:18:28.778 { 00:18:28.778 "name": null, 00:18:28.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.778 "is_configured": false, 00:18:28.779 "data_offset": 2048, 00:18:28.779 "data_size": 63488 00:18:28.779 }, 00:18:28.779 { 00:18:28.779 "name": "pt2", 00:18:28.779 "uuid": "8010b257-d4dc-5c60-b9cc-e77b4e624a93", 00:18:28.779 "is_configured": true, 00:18:28.779 "data_offset": 2048, 00:18:28.779 "data_size": 
63488 00:18:28.779 }, 00:18:28.779 { 00:18:28.779 "name": "pt3", 00:18:28.779 "uuid": "cc47d726-1063-579e-a3e8-6f528b3a0805", 00:18:28.779 "is_configured": true, 00:18:28.779 "data_offset": 2048, 00:18:28.779 "data_size": 63488 00:18:28.779 } 00:18:28.779 ] 00:18:28.779 }' 00:18:28.779 22:26:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.779 22:26:27 -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 22:26:28 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:29.345 22:26:28 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:29.603 [2024-07-14 22:26:28.528695] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.603 22:26:28 -- bdev/bdev_raid.sh@506 -- # '[' f3392c6b-0587-4f18-9328-bd78dce2d752 '!=' f3392c6b-0587-4f18-9328-bd78dce2d752 ']' 00:18:29.603 22:26:28 -- bdev/bdev_raid.sh@511 -- # killprocess 120748 00:18:29.603 22:26:28 -- common/autotest_common.sh@926 -- # '[' -z 120748 ']' 00:18:29.603 22:26:28 -- common/autotest_common.sh@930 -- # kill -0 120748 00:18:29.603 22:26:28 -- common/autotest_common.sh@931 -- # uname 00:18:29.603 22:26:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.603 22:26:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120748 00:18:29.603 killing process with pid 120748 00:18:29.603 22:26:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:29.603 22:26:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:29.603 22:26:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120748' 00:18:29.603 22:26:28 -- common/autotest_common.sh@945 -- # kill 120748 00:18:29.603 22:26:28 -- common/autotest_common.sh@950 -- # wait 120748 00:18:29.603 [2024-07-14 22:26:28.576247] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.603 [2024-07-14 22:26:28.576378] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.603 [2024-07-14 22:26:28.576469] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.603 [2024-07-14 22:26:28.576488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:18:29.861 [2024-07-14 22:26:28.799060] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.235 ************************************ 00:18:31.235 END TEST raid_superblock_test 00:18:31.235 ************************************ 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:31.235 00:18:31.235 real 0m18.433s 00:18:31.235 user 0m33.754s 00:18:31.235 sys 0m2.069s 00:18:31.235 22:26:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.235 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:31.235 22:26:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:31.235 22:26:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.235 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:18:31.235 ************************************ 00:18:31.235 START TEST raid_state_function_test 00:18:31.235 ************************************ 00:18:31.235 22:26:29 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=121379 00:18:31.235 Process raid pid: 121379 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121379' 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121379 /var/tmp/spdk-raid.sock 00:18:31.235 22:26:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:31.235 22:26:29 -- common/autotest_common.sh@819 -- # '[' -z 121379 ']' 00:18:31.235 22:26:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.235 22:26:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:31.235 22:26:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.235 22:26:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:31.235 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:18:31.235 [2024-07-14 22:26:30.044250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:31.235 [2024-07-14 22:26:30.044487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.235 [2024-07-14 22:26:30.217253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.492 [2024-07-14 22:26:30.417149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.749 [2024-07-14 22:26:30.589834] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.006 22:26:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:32.006 22:26:30 -- common/autotest_common.sh@852 -- # return 0 00:18:32.006 22:26:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:32.264 [2024-07-14 22:26:31.088380] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.264 [2024-07-14 22:26:31.088472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.264 [2024-07-14 22:26:31.088485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.264 [2024-07-14 22:26:31.088506] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.264 [2024-07-14 22:26:31.088514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.264 [2024-07-14 22:26:31.088566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.264 [2024-07-14 22:26:31.088574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:32.264 [2024-07-14 22:26:31.088595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.264 "name": "Existed_Raid", 00:18:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.264 "strip_size_kb": 64, 00:18:32.264 "state": "configuring", 00:18:32.264 "raid_level": "raid0", 00:18:32.264 "superblock": false, 00:18:32.264 "num_base_bdevs": 4, 00:18:32.264 "num_base_bdevs_discovered": 0, 00:18:32.264 "num_base_bdevs_operational": 4, 00:18:32.264 "base_bdevs_list": [ 00:18:32.264 { 00:18:32.264 
"name": "BaseBdev1", 00:18:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.264 "is_configured": false, 00:18:32.264 "data_offset": 0, 00:18:32.264 "data_size": 0 00:18:32.264 }, 00:18:32.264 { 00:18:32.264 "name": "BaseBdev2", 00:18:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.264 "is_configured": false, 00:18:32.264 "data_offset": 0, 00:18:32.264 "data_size": 0 00:18:32.264 }, 00:18:32.264 { 00:18:32.264 "name": "BaseBdev3", 00:18:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.264 "is_configured": false, 00:18:32.264 "data_offset": 0, 00:18:32.264 "data_size": 0 00:18:32.264 }, 00:18:32.264 { 00:18:32.264 "name": "BaseBdev4", 00:18:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.264 "is_configured": false, 00:18:32.264 "data_offset": 0, 00:18:32.264 "data_size": 0 00:18:32.264 } 00:18:32.264 ] 00:18:32.264 }' 00:18:32.264 22:26:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.264 22:26:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.829 22:26:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:33.086 [2024-07-14 22:26:32.128486] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.086 [2024-07-14 22:26:32.128547] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:33.086 22:26:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.343 [2024-07-14 22:26:32.392559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.343 [2024-07-14 22:26:32.392632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.343 [2024-07-14 22:26:32.392641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.343 [2024-07-14 22:26:32.392676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.343 [2024-07-14 22:26:32.392684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.343 [2024-07-14 22:26:32.392749] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.343 [2024-07-14 22:26:32.392757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.343 [2024-07-14 22:26:32.392778] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:33.343 22:26:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:33.914 [2024-07-14 22:26:32.664008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.914 BaseBdev1 00:18:33.914 22:26:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:33.914 22:26:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:33.914 22:26:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:33.914 22:26:32 -- common/autotest_common.sh@889 -- # local i 00:18:33.914 22:26:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:33.914 22:26:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:33.914 22:26:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.914 22:26:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:34.234 [ 00:18:34.234 { 00:18:34.234 "name": "BaseBdev1", 00:18:34.234 "aliases": [ 00:18:34.234 "2fa58531-d33d-40de-8cf9-c779a9b27ad7" 00:18:34.234 ], 00:18:34.234 "product_name": "Malloc disk", 00:18:34.234 "block_size": 512, 00:18:34.234 "num_blocks": 65536, 00:18:34.234 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:34.234 "assigned_rate_limits": { 00:18:34.234 "rw_ios_per_sec": 0, 00:18:34.234 "rw_mbytes_per_sec": 0, 00:18:34.234 "r_mbytes_per_sec": 0, 00:18:34.234 "w_mbytes_per_sec": 0 00:18:34.234 }, 00:18:34.234 "claimed": true, 00:18:34.234 "claim_type": "exclusive_write", 00:18:34.234 "zoned": false, 00:18:34.234 "supported_io_types": { 00:18:34.234 "read": true, 00:18:34.234 "write": true, 00:18:34.234 "unmap": true, 00:18:34.234 "write_zeroes": true, 00:18:34.234 "flush": true, 00:18:34.234 "reset": true, 00:18:34.234 "compare": false, 00:18:34.234 "compare_and_write": false, 00:18:34.234 "abort": true, 00:18:34.234 "nvme_admin": false, 00:18:34.234 "nvme_io": false 00:18:34.234 }, 00:18:34.234 "memory_domains": [ 00:18:34.234 { 00:18:34.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.234 "dma_device_type": 2 00:18:34.234 } 00:18:34.234 ], 00:18:34.234 "driver_specific": {} 00:18:34.234 } 00:18:34.234 ] 00:18:34.234 22:26:33 -- common/autotest_common.sh@895 -- # return 0 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.234 22:26:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.235 22:26:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.508 22:26:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.508 "name": "Existed_Raid", 00:18:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.508 "strip_size_kb": 64, 00:18:34.508 "state": "configuring", 00:18:34.508 "raid_level": "raid0", 00:18:34.508 "superblock": false, 00:18:34.508 "num_base_bdevs": 4, 00:18:34.508 "num_base_bdevs_discovered": 1, 00:18:34.508 "num_base_bdevs_operational": 4, 00:18:34.508 "base_bdevs_list": [ 00:18:34.508 { 00:18:34.508 "name": "BaseBdev1", 00:18:34.508 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:34.508 "is_configured": true, 00:18:34.508 "data_offset": 0, 00:18:34.508 "data_size": 65536 00:18:34.508 }, 00:18:34.508 { 00:18:34.508 "name": "BaseBdev2", 00:18:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.508 "is_configured": false, 00:18:34.508 "data_offset": 0, 00:18:34.508 "data_size": 0 00:18:34.508 }, 
00:18:34.508 { 00:18:34.508 "name": "BaseBdev3", 00:18:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.508 "is_configured": false, 00:18:34.508 "data_offset": 0, 00:18:34.508 "data_size": 0 00:18:34.508 }, 00:18:34.508 { 00:18:34.508 "name": "BaseBdev4", 00:18:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.508 "is_configured": false, 00:18:34.508 "data_offset": 0, 00:18:34.508 "data_size": 0 00:18:34.508 } 00:18:34.508 ] 00:18:34.508 }' 00:18:34.508 22:26:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.508 22:26:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.074 22:26:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:35.074 [2024-07-14 22:26:34.060398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.074 [2024-07-14 22:26:34.060466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:35.074 22:26:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:35.074 22:26:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:35.332 [2024-07-14 22:26:34.256483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.332 [2024-07-14 22:26:34.258301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.332 [2024-07-14 22:26:34.258377] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.332 [2024-07-14 22:26:34.258388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.332 [2024-07-14 22:26:34.258410] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.332 [2024-07-14 22:26:34.258418] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.332 [2024-07-14 22:26:34.258433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.332 22:26:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.590 22:26:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.590 "name": "Existed_Raid", 00:18:35.590 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.590 "strip_size_kb": 64, 00:18:35.590 "state": "configuring", 00:18:35.590 "raid_level": "raid0", 00:18:35.590 "superblock": false, 00:18:35.590 "num_base_bdevs": 4, 00:18:35.590 "num_base_bdevs_discovered": 1, 00:18:35.590 "num_base_bdevs_operational": 4, 00:18:35.590 "base_bdevs_list": [ 00:18:35.590 { 00:18:35.590 "name": "BaseBdev1", 00:18:35.590 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:35.590 "is_configured": true, 00:18:35.590 "data_offset": 0, 00:18:35.590 "data_size": 65536 00:18:35.590 }, 00:18:35.590 { 00:18:35.590 "name": "BaseBdev2", 00:18:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.590 "is_configured": false, 00:18:35.590 "data_offset": 0, 00:18:35.590 "data_size": 0 00:18:35.590 }, 00:18:35.590 { 00:18:35.590 "name": "BaseBdev3", 00:18:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.590 "is_configured": false, 00:18:35.590 "data_offset": 0, 00:18:35.590 "data_size": 0 00:18:35.590 }, 00:18:35.590 { 00:18:35.590 "name": "BaseBdev4", 00:18:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.590 "is_configured": false, 00:18:35.590 "data_offset": 0, 00:18:35.590 "data_size": 0 00:18:35.590 } 00:18:35.590 ] 00:18:35.590 }' 00:18:35.590 22:26:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.590 22:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 22:26:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.415 [2024-07-14 22:26:35.299548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.415 BaseBdev2 00:18:36.415 22:26:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:36.415 22:26:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:36.415 22:26:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:36.415 22:26:35 -- common/autotest_common.sh@889 -- # local i 00:18:36.415 22:26:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:36.415 22:26:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:36.415 22:26:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.673 22:26:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.933 [ 00:18:36.933 { 00:18:36.933 "name": "BaseBdev2", 00:18:36.933 "aliases": [ 00:18:36.933 "d5c828d8-6516-41da-a90c-7a5e081bb80c" 00:18:36.933 ], 00:18:36.933 "product_name": "Malloc disk", 00:18:36.933 "block_size": 512, 00:18:36.933 "num_blocks": 65536, 00:18:36.933 "uuid": "d5c828d8-6516-41da-a90c-7a5e081bb80c", 00:18:36.933 "assigned_rate_limits": { 00:18:36.933 "rw_ios_per_sec": 0, 00:18:36.933 "rw_mbytes_per_sec": 0, 00:18:36.933 "r_mbytes_per_sec": 0, 00:18:36.933 "w_mbytes_per_sec": 0 00:18:36.933 }, 00:18:36.933 "claimed": true, 00:18:36.933 "claim_type": "exclusive_write", 00:18:36.933 "zoned": false, 00:18:36.933 "supported_io_types": { 00:18:36.933 "read": true, 00:18:36.933 "write": true, 00:18:36.933 "unmap": true, 00:18:36.933 "write_zeroes": true, 00:18:36.933 "flush": true, 00:18:36.933 "reset": true, 00:18:36.933 "compare": false, 00:18:36.933 "compare_and_write": false, 00:18:36.933 "abort": true, 00:18:36.933 "nvme_admin": false, 00:18:36.933 "nvme_io": false 00:18:36.933 }, 00:18:36.933 "memory_domains": [ 
00:18:36.933 { 00:18:36.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.933 "dma_device_type": 2 00:18:36.933 } 00:18:36.933 ], 00:18:36.933 "driver_specific": {} 00:18:36.933 } 00:18:36.933 ] 00:18:36.933 22:26:35 -- common/autotest_common.sh@895 -- # return 0 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.933 22:26:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.193 22:26:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.193 "name": "Existed_Raid", 00:18:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.193 "strip_size_kb": 64, 00:18:37.193 "state": "configuring", 00:18:37.193 "raid_level": "raid0", 00:18:37.193 "superblock": false, 00:18:37.193 "num_base_bdevs": 4, 00:18:37.193 "num_base_bdevs_discovered": 2, 00:18:37.193 "num_base_bdevs_operational": 4, 00:18:37.193 "base_bdevs_list": [ 00:18:37.193 { 00:18:37.193 "name": "BaseBdev1", 00:18:37.193 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:37.193 "is_configured": true, 00:18:37.193 "data_offset": 0, 00:18:37.193 "data_size": 65536 00:18:37.193 }, 00:18:37.193 { 00:18:37.193 "name": "BaseBdev2", 00:18:37.193 "uuid": "d5c828d8-6516-41da-a90c-7a5e081bb80c", 00:18:37.193 "is_configured": true, 00:18:37.193 "data_offset": 0, 00:18:37.193 "data_size": 65536 00:18:37.193 }, 00:18:37.193 { 00:18:37.193 "name": "BaseBdev3", 00:18:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.193 "is_configured": false, 00:18:37.193 "data_offset": 0, 00:18:37.193 "data_size": 0 00:18:37.193 }, 00:18:37.193 { 00:18:37.193 "name": "BaseBdev4", 00:18:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.193 "is_configured": false, 00:18:37.193 "data_offset": 0, 00:18:37.193 "data_size": 0 00:18:37.193 } 00:18:37.193 ] 00:18:37.193 }' 00:18:37.193 22:26:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.193 22:26:36 -- common/autotest_common.sh@10 -- # set +x 00:18:37.760 22:26:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:38.018 [2024-07-14 22:26:36.831583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.018 BaseBdev3 00:18:38.018 22:26:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:38.018 22:26:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:38.018 22:26:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:38.018 
22:26:36 -- common/autotest_common.sh@889 -- # local i 00:18:38.018 22:26:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:38.018 22:26:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:38.018 22:26:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.277 22:26:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:38.277 [ 00:18:38.277 { 00:18:38.277 "name": "BaseBdev3", 00:18:38.277 "aliases": [ 00:18:38.277 "9386b20e-7b13-4696-b452-e05d5efe6677" 00:18:38.277 ], 00:18:38.277 "product_name": "Malloc disk", 00:18:38.277 "block_size": 512, 00:18:38.277 "num_blocks": 65536, 00:18:38.277 "uuid": "9386b20e-7b13-4696-b452-e05d5efe6677", 00:18:38.277 "assigned_rate_limits": { 00:18:38.277 "rw_ios_per_sec": 0, 00:18:38.277 "rw_mbytes_per_sec": 0, 00:18:38.277 "r_mbytes_per_sec": 0, 00:18:38.277 "w_mbytes_per_sec": 0 00:18:38.277 }, 00:18:38.277 "claimed": true, 00:18:38.277 "claim_type": "exclusive_write", 00:18:38.277 "zoned": false, 00:18:38.277 "supported_io_types": { 00:18:38.277 "read": true, 00:18:38.277 "write": true, 00:18:38.277 "unmap": true, 00:18:38.277 "write_zeroes": true, 00:18:38.277 "flush": true, 00:18:38.277 "reset": true, 00:18:38.277 "compare": false, 00:18:38.277 "compare_and_write": false, 00:18:38.277 "abort": true, 00:18:38.277 "nvme_admin": false, 00:18:38.277 "nvme_io": false 00:18:38.277 }, 00:18:38.277 "memory_domains": [ 00:18:38.277 { 00:18:38.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.277 "dma_device_type": 2 00:18:38.277 } 00:18:38.277 ], 00:18:38.277 "driver_specific": {} 00:18:38.277 } 00:18:38.277 ] 00:18:38.277 22:26:37 -- common/autotest_common.sh@895 -- # return 0 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.277 22:26:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.536 22:26:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.536 "name": "Existed_Raid", 00:18:38.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.536 "strip_size_kb": 64, 00:18:38.536 "state": "configuring", 00:18:38.536 "raid_level": "raid0", 00:18:38.536 "superblock": false, 00:18:38.536 "num_base_bdevs": 4, 00:18:38.536 "num_base_bdevs_discovered": 3, 00:18:38.536 "num_base_bdevs_operational": 4, 00:18:38.536 "base_bdevs_list": [ 00:18:38.536 { 00:18:38.536 "name": 
"BaseBdev1", 00:18:38.536 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:38.536 "is_configured": true, 00:18:38.536 "data_offset": 0, 00:18:38.536 "data_size": 65536 00:18:38.536 }, 00:18:38.536 { 00:18:38.536 "name": "BaseBdev2", 00:18:38.536 "uuid": "d5c828d8-6516-41da-a90c-7a5e081bb80c", 00:18:38.536 "is_configured": true, 00:18:38.536 "data_offset": 0, 00:18:38.536 "data_size": 65536 00:18:38.536 }, 00:18:38.536 { 00:18:38.536 "name": "BaseBdev3", 00:18:38.536 "uuid": "9386b20e-7b13-4696-b452-e05d5efe6677", 00:18:38.536 "is_configured": true, 00:18:38.536 "data_offset": 0, 00:18:38.536 "data_size": 65536 00:18:38.536 }, 00:18:38.536 { 00:18:38.536 "name": "BaseBdev4", 00:18:38.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.536 "is_configured": false, 00:18:38.536 "data_offset": 0, 00:18:38.536 "data_size": 0 00:18:38.536 } 00:18:38.536 ] 00:18:38.536 }' 00:18:38.536 22:26:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.536 22:26:37 -- common/autotest_common.sh@10 -- # set +x 00:18:39.472 22:26:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:39.472 [2024-07-14 22:26:38.492099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.472 [2024-07-14 22:26:38.492151] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:39.472 [2024-07-14 22:26:38.492160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:39.472 [2024-07-14 22:26:38.492288] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:39.472 [2024-07-14 22:26:38.492609] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:39.472 [2024-07-14 22:26:38.492630] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:39.472 [2024-07-14 22:26:38.492874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.472 BaseBdev4 00:18:39.472 22:26:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:39.472 22:26:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:39.472 22:26:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:39.472 22:26:38 -- common/autotest_common.sh@889 -- # local i 00:18:39.472 22:26:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:39.472 22:26:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:39.472 22:26:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.731 22:26:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:39.989 [ 00:18:39.989 { 00:18:39.989 "name": "BaseBdev4", 00:18:39.989 "aliases": [ 00:18:39.989 "1bafb79e-fbcb-4e4b-a92e-c2aa4e469ac1" 00:18:39.989 ], 00:18:39.989 "product_name": "Malloc disk", 00:18:39.989 "block_size": 512, 00:18:39.989 "num_blocks": 65536, 00:18:39.989 "uuid": "1bafb79e-fbcb-4e4b-a92e-c2aa4e469ac1", 00:18:39.989 "assigned_rate_limits": { 00:18:39.989 "rw_ios_per_sec": 0, 00:18:39.989 "rw_mbytes_per_sec": 0, 00:18:39.989 "r_mbytes_per_sec": 0, 00:18:39.989 "w_mbytes_per_sec": 0 00:18:39.989 }, 00:18:39.989 "claimed": true, 00:18:39.989 "claim_type": "exclusive_write", 00:18:39.989 "zoned": false, 00:18:39.989 
"supported_io_types": { 00:18:39.989 "read": true, 00:18:39.989 "write": true, 00:18:39.989 "unmap": true, 00:18:39.989 "write_zeroes": true, 00:18:39.989 "flush": true, 00:18:39.989 "reset": true, 00:18:39.989 "compare": false, 00:18:39.989 "compare_and_write": false, 00:18:39.989 "abort": true, 00:18:39.989 "nvme_admin": false, 00:18:39.989 "nvme_io": false 00:18:39.989 }, 00:18:39.989 "memory_domains": [ 00:18:39.989 { 00:18:39.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.989 "dma_device_type": 2 00:18:39.989 } 00:18:39.989 ], 00:18:39.989 "driver_specific": {} 00:18:39.989 } 00:18:39.989 ] 00:18:39.990 22:26:38 -- common/autotest_common.sh@895 -- # return 0 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.990 22:26:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.248 22:26:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.248 "name": "Existed_Raid", 00:18:40.248 "uuid": "13d0d0f2-cdce-4d4d-865e-803e517816d4", 00:18:40.248 "strip_size_kb": 64, 00:18:40.248 "state": "online", 00:18:40.248 "raid_level": "raid0", 00:18:40.248 "superblock": false, 00:18:40.248 "num_base_bdevs": 4, 00:18:40.248 "num_base_bdevs_discovered": 4, 00:18:40.248 "num_base_bdevs_operational": 4, 00:18:40.248 "base_bdevs_list": [ 00:18:40.248 { 00:18:40.248 "name": "BaseBdev1", 00:18:40.248 "uuid": "2fa58531-d33d-40de-8cf9-c779a9b27ad7", 00:18:40.248 "is_configured": true, 00:18:40.248 "data_offset": 0, 00:18:40.248 "data_size": 65536 00:18:40.248 }, 00:18:40.248 { 00:18:40.248 "name": "BaseBdev2", 00:18:40.248 "uuid": "d5c828d8-6516-41da-a90c-7a5e081bb80c", 00:18:40.248 "is_configured": true, 00:18:40.248 "data_offset": 0, 00:18:40.248 "data_size": 65536 00:18:40.248 }, 00:18:40.248 { 00:18:40.248 "name": "BaseBdev3", 00:18:40.248 "uuid": "9386b20e-7b13-4696-b452-e05d5efe6677", 00:18:40.248 "is_configured": true, 00:18:40.248 "data_offset": 0, 00:18:40.248 "data_size": 65536 00:18:40.248 }, 00:18:40.248 { 00:18:40.248 "name": "BaseBdev4", 00:18:40.248 "uuid": "1bafb79e-fbcb-4e4b-a92e-c2aa4e469ac1", 00:18:40.248 "is_configured": true, 00:18:40.248 "data_offset": 0, 00:18:40.248 "data_size": 65536 00:18:40.248 } 00:18:40.248 ] 00:18:40.248 }' 00:18:40.248 22:26:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.248 22:26:39 -- common/autotest_common.sh@10 -- # set +x 00:18:40.811 22:26:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:40.811 
[2024-07-14 22:26:39.864519] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.811 [2024-07-14 22:26:39.864552] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.811 [2024-07-14 22:26:39.864635] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.068 22:26:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.325 22:26:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.325 "name": "Existed_Raid", 00:18:41.325 "uuid": "13d0d0f2-cdce-4d4d-865e-803e517816d4", 00:18:41.325 "strip_size_kb": 64, 00:18:41.325 "state": "offline", 00:18:41.325 "raid_level": "raid0", 00:18:41.325 "superblock": false, 00:18:41.325 "num_base_bdevs": 4, 00:18:41.325 "num_base_bdevs_discovered": 3, 00:18:41.325 "num_base_bdevs_operational": 3, 00:18:41.325 "base_bdevs_list": [ 00:18:41.325 { 00:18:41.325 "name": null, 00:18:41.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.325 "is_configured": false, 00:18:41.325 "data_offset": 0, 00:18:41.325 "data_size": 65536 00:18:41.325 }, 00:18:41.325 { 00:18:41.325 "name": "BaseBdev2", 00:18:41.325 "uuid": "d5c828d8-6516-41da-a90c-7a5e081bb80c", 00:18:41.325 "is_configured": true, 00:18:41.325 "data_offset": 0, 00:18:41.325 "data_size": 65536 00:18:41.325 }, 00:18:41.325 { 00:18:41.325 "name": "BaseBdev3", 00:18:41.325 "uuid": "9386b20e-7b13-4696-b452-e05d5efe6677", 00:18:41.325 "is_configured": true, 00:18:41.325 "data_offset": 0, 00:18:41.325 "data_size": 65536 00:18:41.325 }, 00:18:41.325 { 00:18:41.325 "name": "BaseBdev4", 00:18:41.325 "uuid": "1bafb79e-fbcb-4e4b-a92e-c2aa4e469ac1", 00:18:41.325 "is_configured": true, 00:18:41.325 "data_offset": 0, 00:18:41.325 "data_size": 65536 00:18:41.325 } 00:18:41.325 ] 00:18:41.325 }' 00:18:41.325 22:26:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.325 22:26:40 -- common/autotest_common.sh@10 -- # set +x 00:18:41.889 22:26:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:41.889 22:26:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:41.889 22:26:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.889 
22:26:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.145 22:26:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.145 22:26:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.145 22:26:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.402 [2024-07-14 22:26:41.230040] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.402 22:26:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.402 22:26:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.402 22:26:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.402 22:26:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.659 22:26:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.659 22:26:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.659 22:26:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:42.659 [2024-07-14 22:26:41.662432] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.916 22:26:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:43.173 [2024-07-14 22:26:42.177388] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:43.173 [2024-07-14 22:26:42.177465] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:43.431 22:26:42 -- bdev/bdev_raid.sh@287 -- # killprocess 121379 00:18:43.431 22:26:42 -- common/autotest_common.sh@926 -- # '[' -z 121379 ']' 00:18:43.431 22:26:42 -- common/autotest_common.sh@930 -- # kill -0 121379 00:18:43.431 22:26:42 -- common/autotest_common.sh@931 -- # uname 00:18:43.431 22:26:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:43.689 22:26:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121379 00:18:43.689 killing process with pid 121379 00:18:43.689 22:26:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:43.689 22:26:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:43.689 22:26:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121379' 00:18:43.689 22:26:42 -- common/autotest_common.sh@945 -- # kill 121379 
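The teardown traced at this point follows the killprocess pattern from autotest_common.sh: probe the pid with kill -0, sanity-check the command name (reactor_0 for an SPDK app), then kill and reap it so the RPC socket is released. A simplified bash sketch reconstructed from the visible xtrace — an approximation, not the exact helper source:

```bash
# Simplified sketch of the killprocess() flow traced above; reconstructed
# from this log's xtrace, not copied from test/common/autotest_common.sh.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # is the process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for SPDK apps
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap it so the socket frees up
}
```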
00:18:43.689 22:26:42 -- common/autotest_common.sh@950 -- # wait 121379
00:18:43.689 [2024-07-14 22:26:42.509379] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:43.689 [2024-07-14 22:26:42.509505] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:44.624 ************************************
00:18:44.624 END TEST raid_state_function_test
00:18:44.624 ************************************
00:18:44.624 22:26:43 -- bdev/bdev_raid.sh@289 -- # return 0
00:18:44.624
00:18:44.624 real 0m13.459s
00:18:44.624 user 0m24.162s
00:18:44.624 sys 0m1.532s
00:18:44.624 22:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:44.624 22:26:43 -- common/autotest_common.sh@10 -- # set +x
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:18:44.625 22:26:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:18:44.625 22:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:18:44.625 22:26:43 -- common/autotest_common.sh@10 -- # set +x
00:18:44.625 ************************************
00:18:44.625 START TEST raid_state_function_test_sb
00:18:44.625 ************************************
00:18:44.625 22:26:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:18:44.625 22:26:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=121822
00:18:44.625 Process raid pid: 121822
22:26:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121822'
22:26:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121822 /var/tmp/spdk-raid.sock
22:26:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
22:26:43 -- common/autotest_common.sh@819 -- # '[' -z 121822 ']'
22:26:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
22:26:43 -- common/autotest_common.sh@824 -- # local max_retries=100
22:26:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
22:26:43 -- common/autotest_common.sh@828 -- # xtrace_disable
22:26:43 -- common/autotest_common.sh@10 -- # set +x
00:18:44.882 [2024-07-14 22:26:43.572146] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:18:44.882 [2024-07-14 22:26:43.572352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:44.882 [2024-07-14 22:26:43.744752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:45.139 [2024-07-14 22:26:43.899740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.139 [2024-07-14 22:26:44.064458] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:45.397 22:26:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:18:45.397 22:26:44 -- common/autotest_common.sh@852 -- # return 0
00:18:45.397 22:26:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:45.655 [2024-07-14 22:26:44.622363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:45.655 [2024-07-14 22:26:44.622433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:45.655 [2024-07-14 22:26:44.622445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:45.655 [2024-07-14 22:26:44.622476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:45.655 [2024-07-14 22:26:44.622482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:45.655 [2024-07-14 22:26:44.622516] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:45.655 [2024-07-14 22:26:44.622524] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:45.655 [2024-07-14 22:26:44.622588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:45.655 22:26:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:45.913 22:26:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:45.913 "name": "Existed_Raid",
00:18:45.913 "uuid": "bac78737-59a8-4a7d-9423-4282a07b6a59",
00:18:45.913 "strip_size_kb": 64,
00:18:45.913 "state": "configuring",
00:18:45.913 "raid_level": "raid0",
00:18:45.913 "superblock": true,
00:18:45.913 "num_base_bdevs": 4,
00:18:45.913 "num_base_bdevs_discovered": 0,
00:18:45.913 "num_base_bdevs_operational": 4,
00:18:45.913 "base_bdevs_list": [
00:18:45.913 {
00:18:45.913 "name": "BaseBdev1",
00:18:45.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.913 "is_configured": false,
00:18:45.913 "data_offset": 0,
00:18:45.913 "data_size": 0
00:18:45.913 },
00:18:45.913 {
00:18:45.913 "name": "BaseBdev2",
00:18:45.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.913 "is_configured": false,
00:18:45.913 "data_offset": 0,
00:18:45.913 "data_size": 0
00:18:45.913 },
00:18:45.913 {
00:18:45.913 "name": "BaseBdev3",
00:18:45.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.913 "is_configured": false,
00:18:45.913 "data_offset": 0,
00:18:45.913 "data_size": 0
00:18:45.913 },
00:18:45.913 {
00:18:45.913 "name": "BaseBdev4",
00:18:45.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.913 "is_configured": false,
00:18:45.913 "data_offset": 0,
00:18:45.913 "data_size": 0
00:18:45.913 }
00:18:45.913 ]
00:18:45.913 }'
00:18:45.913 22:26:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:45.913 22:26:44 -- common/autotest_common.sh@10 -- # set +x
00:18:46.477 22:26:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:46.736 [2024-07-14 22:26:45.658604] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:46.736 [2024-07-14 22:26:45.658739] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:18:46.736 22:26:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:46.994 [2024-07-14 22:26:45.918706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:46.994 [2024-07-14 22:26:45.918922] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:46.994 [2024-07-14 22:26:45.919019] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:46.994 [2024-07-14 22:26:45.919094] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:46.994 [2024-07-14 22:26:45.919209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:46.994 [2024-07-14 22:26:45.919297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:46.994 [2024-07-14 22:26:45.919323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:46.994 [2024-07-14 22:26:45.919478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:46.994 22:26:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:47.253 [2024-07-14 22:26:46.147644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:47.253 BaseBdev1
00:18:47.253 22:26:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:18:47.253 22:26:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:18:47.253 22:26:46 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:18:47.253 22:26:46 -- common/autotest_common.sh@889 -- # local i
00:18:47.253 22:26:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:18:47.253 22:26:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:18:47.253 22:26:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:47.518 22:26:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:47.518 [
00:18:47.518 {
00:18:47.518 "name": "BaseBdev1",
00:18:47.518 "aliases": [
00:18:47.518 "13bdbdb3-432c-41bd-b523-94b5acc5a090"
00:18:47.518 ],
00:18:47.518 "product_name": "Malloc disk",
00:18:47.518 "block_size": 512,
00:18:47.518 "num_blocks": 65536,
00:18:47.518 "uuid": "13bdbdb3-432c-41bd-b523-94b5acc5a090",
00:18:47.518 "assigned_rate_limits": {
00:18:47.518 "rw_ios_per_sec": 0,
00:18:47.518 "rw_mbytes_per_sec": 0,
00:18:47.518 "r_mbytes_per_sec": 0,
00:18:47.518 "w_mbytes_per_sec": 0
00:18:47.518 },
00:18:47.518 "claimed": true,
00:18:47.518 "claim_type": "exclusive_write",
00:18:47.518 "zoned": false,
00:18:47.518 "supported_io_types": {
00:18:47.518 "read": true,
00:18:47.518 "write": true,
00:18:47.518 "unmap": true,
00:18:47.518 "write_zeroes": true,
00:18:47.518 "flush": true,
00:18:47.518 "reset": true,
00:18:47.518 "compare": false,
00:18:47.518 "compare_and_write": false,
00:18:47.518 "abort": true,
00:18:47.518 "nvme_admin": false,
00:18:47.518 "nvme_io": false
00:18:47.518 },
00:18:47.518 "memory_domains": [
00:18:47.518 {
00:18:47.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:47.518 "dma_device_type": 2
00:18:47.518 }
00:18:47.518 ],
00:18:47.518 "driver_specific": {}
00:18:47.518 }
00:18:47.518 ]
00:18:47.518 22:26:46 -- common/autotest_common.sh@895 -- # return 0
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:47.518 22:26:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:48.082 22:26:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:48.082 "name": "Existed_Raid",
00:18:48.082 "uuid": "2e9421d8-b503-447f-b64a-9b851130c764",
00:18:48.082 "strip_size_kb": 64,
00:18:48.082 "state": "configuring",
00:18:48.082 "raid_level": "raid0",
00:18:48.082 "superblock": true,
00:18:48.082 "num_base_bdevs": 4,
00:18:48.082 "num_base_bdevs_discovered": 1,
00:18:48.082 "num_base_bdevs_operational": 4,
00:18:48.082 "base_bdevs_list": [
00:18:48.082 {
00:18:48.082 "name": "BaseBdev1",
00:18:48.082 "uuid": "13bdbdb3-432c-41bd-b523-94b5acc5a090",
00:18:48.082 "is_configured": true,
00:18:48.082 "data_offset": 2048,
00:18:48.082 "data_size": 63488
00:18:48.082 },
00:18:48.082 {
00:18:48.082 "name": "BaseBdev2",
00:18:48.082 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:48.082 "is_configured": false,
00:18:48.082 "data_offset": 0,
00:18:48.082 "data_size": 0
00:18:48.082 },
00:18:48.082 {
00:18:48.082 "name": "BaseBdev3",
00:18:48.082 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:48.082 "is_configured": false,
00:18:48.082 "data_offset": 0,
00:18:48.082 "data_size": 0
00:18:48.082 },
00:18:48.082 {
00:18:48.082 "name": "BaseBdev4",
00:18:48.082 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:48.082 "is_configured": false,
00:18:48.083 "data_offset": 0,
00:18:48.083 "data_size": 0
00:18:48.083 }
00:18:48.083 ]
00:18:48.083 }'
00:18:48.083 22:26:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:48.083 22:26:46 -- common/autotest_common.sh@10 -- # set +x
00:18:48.647 22:26:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:48.647 [2024-07-14 22:26:47.650605] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:48.647 [2024-07-14 22:26:47.650751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring
00:18:48.647 22:26:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:18:48.647 22:26:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:48.905 22:26:47 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:49.162 BaseBdev1
00:18:49.162 22:26:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:18:49.162 22:26:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:18:49.162 22:26:48 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:18:49.162 22:26:48 -- common/autotest_common.sh@889 -- # local i
00:18:49.162 22:26:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:18:49.162 22:26:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:18:49.162 22:26:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:49.420 22:26:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:49.678 [
00:18:49.678 {
00:18:49.678 "name": "BaseBdev1",
00:18:49.678 "aliases": [
00:18:49.678 "c97d4ae6-5a21-4cb8-9b43-d8e699df4994"
00:18:49.678 ],
00:18:49.678 "product_name": "Malloc disk",
00:18:49.678 "block_size": 512,
00:18:49.678 "num_blocks": 65536,
00:18:49.678 "uuid": "c97d4ae6-5a21-4cb8-9b43-d8e699df4994",
00:18:49.678 "assigned_rate_limits": {
00:18:49.678 "rw_ios_per_sec": 0,
00:18:49.678 "rw_mbytes_per_sec": 0,
00:18:49.678 "r_mbytes_per_sec": 0,
00:18:49.678 "w_mbytes_per_sec": 0
00:18:49.678 },
00:18:49.678 "claimed": false,
00:18:49.678 "zoned": false,
00:18:49.678 "supported_io_types": {
00:18:49.678 "read": true,
00:18:49.678 "write": true,
00:18:49.678 "unmap": true,
00:18:49.678 "write_zeroes": true,
00:18:49.678 "flush": true,
00:18:49.678 "reset": true,
00:18:49.678 "compare": false,
00:18:49.678 "compare_and_write": false,
00:18:49.678 "abort": true,
00:18:49.678 "nvme_admin": false,
00:18:49.678 "nvme_io": false
00:18:49.678 },
00:18:49.678 "memory_domains": [
00:18:49.678 {
00:18:49.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:49.678 "dma_device_type": 2
00:18:49.678 }
00:18:49.678 ],
00:18:49.678 "driver_specific": {}
00:18:49.678 }
00:18:49.678 ]
00:18:49.678 22:26:48 -- common/autotest_common.sh@895 -- # return 0
00:18:49.678 22:26:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:49.936 [2024-07-14 22:26:48.767458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:49.936 [2024-07-14 22:26:48.769145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:49.936 [2024-07-14 22:26:48.769313] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:49.936 [2024-07-14 22:26:48.769404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:49.936 [2024-07-14 22:26:48.769457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:49.936 [2024-07-14 22:26:48.769483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:49.936 [2024-07-14 22:26:48.769582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:49.936 22:26:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:50.194 22:26:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:50.194 "name": "Existed_Raid",
00:18:50.194 "uuid": "ac59050d-9a32-40cd-8503-4ea9dcb96a7c",
00:18:50.194 "strip_size_kb": 64,
00:18:50.194 "state": "configuring",
00:18:50.194 "raid_level": "raid0",
00:18:50.194 "superblock": true,
00:18:50.194 "num_base_bdevs": 4,
00:18:50.194 "num_base_bdevs_discovered": 1,
00:18:50.194 "num_base_bdevs_operational": 4,
00:18:50.194 "base_bdevs_list": [
00:18:50.194 {
00:18:50.194 "name": "BaseBdev1",
00:18:50.194 "uuid": "c97d4ae6-5a21-4cb8-9b43-d8e699df4994",
00:18:50.194 "is_configured": true,
00:18:50.194 "data_offset": 2048,
00:18:50.194 "data_size": 63488
00:18:50.194 },
00:18:50.194 {
00:18:50.194 "name": "BaseBdev2",
00:18:50.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.194 "is_configured": false,
00:18:50.194 "data_offset": 0,
00:18:50.194 "data_size": 0
00:18:50.194 },
00:18:50.194 {
00:18:50.194 "name": "BaseBdev3",
00:18:50.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.194 "is_configured": false,
00:18:50.194 "data_offset": 0,
00:18:50.194 "data_size": 0
00:18:50.194 },
00:18:50.194 {
00:18:50.194 "name": "BaseBdev4",
00:18:50.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.194 "is_configured": false,
00:18:50.194 "data_offset": 0,
00:18:50.194 "data_size": 0
00:18:50.194 }
00:18:50.194 ]
00:18:50.194 }'
00:18:50.195 22:26:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:50.195 22:26:49 -- common/autotest_common.sh@10 -- # set +x
00:18:50.760 22:26:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:51.019 [2024-07-14 22:26:49.872041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:51.019 BaseBdev2
00:18:51.019 22:26:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:18:51.019 22:26:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:18:51.019 22:26:49 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:18:51.019 22:26:49 -- common/autotest_common.sh@889 -- # local i
00:18:51.019 22:26:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:18:51.019 22:26:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:18:51.019 22:26:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:51.019 22:26:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:51.277 [
00:18:51.277 {
00:18:51.277 "name": "BaseBdev2",
00:18:51.277 "aliases": [
00:18:51.277 "1339a31f-b8ce-4707-b3f1-e4ef90157318"
00:18:51.277 ],
00:18:51.277 "product_name": "Malloc disk",
00:18:51.277 "block_size": 512,
00:18:51.277 "num_blocks": 65536,
00:18:51.277 "uuid": "1339a31f-b8ce-4707-b3f1-e4ef90157318",
00:18:51.277 "assigned_rate_limits": {
00:18:51.277 "rw_ios_per_sec": 0,
00:18:51.277 "rw_mbytes_per_sec": 0,
00:18:51.277 "r_mbytes_per_sec": 0,
00:18:51.277 "w_mbytes_per_sec": 0
00:18:51.277 },
00:18:51.277 "claimed": true,
00:18:51.277 "claim_type": "exclusive_write",
00:18:51.277 "zoned": false,
00:18:51.277 "supported_io_types": {
00:18:51.277 "read": true,
00:18:51.277 "write": true,
00:18:51.277 "unmap": true,
00:18:51.277 "write_zeroes": true,
00:18:51.277 "flush": true,
00:18:51.277 "reset": true,
00:18:51.277 "compare": false,
00:18:51.277 "compare_and_write": false,
00:18:51.277 "abort": true,
00:18:51.277 "nvme_admin": false,
00:18:51.277 "nvme_io": false
00:18:51.277 },
00:18:51.277 "memory_domains": [
00:18:51.277 {
00:18:51.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:51.278 "dma_device_type": 2
00:18:51.278 }
00:18:51.278 ],
00:18:51.278 "driver_specific": {}
00:18:51.278 }
00:18:51.278 ]
00:18:51.278 22:26:50 -- common/autotest_common.sh@895 -- # return 0
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:51.278 22:26:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:51.536 22:26:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:51.536 "name": "Existed_Raid",
00:18:51.536 "uuid": "ac59050d-9a32-40cd-8503-4ea9dcb96a7c",
00:18:51.536 "strip_size_kb": 64,
00:18:51.536 "state": "configuring",
00:18:51.536 "raid_level": "raid0",
00:18:51.536 "superblock": true,
00:18:51.536 "num_base_bdevs": 4,
00:18:51.536 "num_base_bdevs_discovered": 2,
00:18:51.536 "num_base_bdevs_operational": 4,
00:18:51.536 "base_bdevs_list": [
00:18:51.536 {
00:18:51.536 "name": "BaseBdev1",
00:18:51.536 "uuid": "c97d4ae6-5a21-4cb8-9b43-d8e699df4994",
00:18:51.536 "is_configured": true,
00:18:51.536 "data_offset": 2048,
00:18:51.536 "data_size": 63488
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev2",
00:18:51.536 "uuid": "1339a31f-b8ce-4707-b3f1-e4ef90157318",
00:18:51.536 "is_configured": true,
00:18:51.536 "data_offset": 2048,
00:18:51.536 "data_size": 63488
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev3",
00:18:51.536 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.536 "is_configured": false,
00:18:51.536 "data_offset": 0,
00:18:51.536 "data_size": 0
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev4",
00:18:51.536 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.536 "is_configured": false,
00:18:51.536 "data_offset": 0,
00:18:51.536 "data_size": 0
00:18:51.536 }
00:18:51.536 ]
00:18:51.536 }'
00:18:51.536 22:26:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:51.536 22:26:50 -- common/autotest_common.sh@10 -- # set +x
00:18:52.469 22:26:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:52.469 [2024-07-14 22:26:51.396177] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:52.469 BaseBdev3
00:18:52.469 22:26:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:18:52.469 22:26:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:18:52.469 22:26:51 -- common/autotest_common.sh@888 -- # local bdev_timeout=
22:26:51 -- common/autotest_common.sh@889 -- # local i
00:18:52.469 22:26:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:18:52.469 22:26:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:18:52.469 22:26:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:52.726 22:26:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:52.726 [
00:18:52.726 {
00:18:52.726 "name": "BaseBdev3",
00:18:52.726 "aliases": [
00:18:52.726 "e0d80e04-ff42-41fb-a1fa-75a10cc32665"
00:18:52.726 ],
00:18:52.726 "product_name": "Malloc disk",
00:18:52.726 "block_size": 512,
00:18:52.726 "num_blocks": 65536,
00:18:52.726 "uuid": "e0d80e04-ff42-41fb-a1fa-75a10cc32665",
00:18:52.726 "assigned_rate_limits": {
00:18:52.726 "rw_ios_per_sec": 0,
00:18:52.726 "rw_mbytes_per_sec": 0,
00:18:52.726 "r_mbytes_per_sec": 0,
00:18:52.726 "w_mbytes_per_sec": 0
00:18:52.726 },
00:18:52.726 "claimed": true,
00:18:52.726 "claim_type": "exclusive_write",
00:18:52.726 "zoned": false,
00:18:52.726 "supported_io_types": {
00:18:52.726 "read": true,
00:18:52.726 "write": true,
00:18:52.726 "unmap": true,
00:18:52.726 "write_zeroes": true,
00:18:52.726 "flush": true,
00:18:52.726 "reset": true,
00:18:52.726 "compare": false,
00:18:52.726 "compare_and_write": false,
00:18:52.726 "abort": true,
00:18:52.726 "nvme_admin": false,
00:18:52.726 "nvme_io": false
00:18:52.726 },
00:18:52.726 "memory_domains": [
00:18:52.726 {
00:18:52.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:52.726 "dma_device_type": 2
00:18:52.726 }
00:18:52.726 ],
00:18:52.726 "driver_specific": {}
00:18:52.726 }
00:18:52.726 ]
00:18:52.984 22:26:51 -- common/autotest_common.sh@895 -- # return 0
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:52.984 22:26:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:52.985 22:26:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:53.242 22:26:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:53.242 "name": "Existed_Raid",
00:18:53.243 "uuid": "ac59050d-9a32-40cd-8503-4ea9dcb96a7c",
00:18:53.243 "strip_size_kb": 64,
00:18:53.243 "state": "configuring",
00:18:53.243 "raid_level": "raid0",
00:18:53.243 "superblock": true,
00:18:53.243 "num_base_bdevs": 4,
00:18:53.243 "num_base_bdevs_discovered": 3,
00:18:53.243 "num_base_bdevs_operational": 4,
00:18:53.243 "base_bdevs_list": [
00:18:53.243 {
00:18:53.243 "name": "BaseBdev1",
00:18:53.243 "uuid": "c97d4ae6-5a21-4cb8-9b43-d8e699df4994",
00:18:53.243 "is_configured": true,
00:18:53.243 "data_offset": 2048,
00:18:53.243 "data_size": 63488
00:18:53.243 },
00:18:53.243 {
00:18:53.243 "name": "BaseBdev2",
00:18:53.243 "uuid": "1339a31f-b8ce-4707-b3f1-e4ef90157318",
00:18:53.243 "is_configured": true,
00:18:53.243 "data_offset": 2048,
00:18:53.243 "data_size": 63488
00:18:53.243 },
00:18:53.243 {
00:18:53.243 "name": "BaseBdev3",
00:18:53.243 "uuid": "e0d80e04-ff42-41fb-a1fa-75a10cc32665",
00:18:53.243 "is_configured": true,
00:18:53.243 "data_offset": 2048,
00:18:53.243 "data_size": 63488
00:18:53.243 },
00:18:53.243 {
00:18:53.243 "name": "BaseBdev4",
00:18:53.243 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:53.243 "is_configured": false,
00:18:53.243 "data_offset": 0,
00:18:53.243 "data_size": 0
00:18:53.243 }
00:18:53.243 ]
00:18:53.243 }'
00:18:53.243 22:26:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:53.243 22:26:52 -- common/autotest_common.sh@10 -- # set +x
00:18:53.808 22:26:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:54.067 [2024-07-14 22:26:52.952250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:54.067 [2024-07-14 22:26:52.952475] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:18:54.067 [2024-07-14 22:26:52.952490] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:54.067 BaseBdev4
[2024-07-14 22:26:52.952632] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:18:54.067 [2024-07-14 22:26:52.952938] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:18:54.067 [2024-07-14 22:26:52.952959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880
00:18:54.067 [2024-07-14 22:26:52.953084] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:54.067 22:26:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:18:54.067 22:26:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4
00:18:54.067 22:26:52 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:18:54.067 22:26:52 -- common/autotest_common.sh@889 -- # local i
00:18:54.067 22:26:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:18:54.067 22:26:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:18:54.067 22:26:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:54.344 22:26:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:54.344 [
00:18:54.344 {
00:18:54.344 "name": "BaseBdev4",
00:18:54.344 "aliases": [
00:18:54.344 "98e26057-375c-4d77-8819-eb714c46a4cf"
00:18:54.344 ],
00:18:54.344 "product_name": "Malloc disk",
00:18:54.344 "block_size": 512,
00:18:54.344 "num_blocks": 65536,
00:18:54.344 "uuid": "98e26057-375c-4d77-8819-eb714c46a4cf",
00:18:54.344 "assigned_rate_limits": {
00:18:54.344 "rw_ios_per_sec": 0,
00:18:54.344 "rw_mbytes_per_sec": 0,
00:18:54.344 "r_mbytes_per_sec": 0,
00:18:54.344 "w_mbytes_per_sec": 0
00:18:54.344 },
00:18:54.344 "claimed": true,
00:18:54.344 "claim_type": "exclusive_write",
00:18:54.344 "zoned": false,
"supported_io_types": {
00:18:54.344 "read": true,
00:18:54.344 "write": true,
00:18:54.344 "unmap": true,
00:18:54.344 "write_zeroes": true,
00:18:54.344 "flush": true,
00:18:54.344 "reset": true,
00:18:54.344 "compare": false,
00:18:54.344 "compare_and_write": false,
00:18:54.344 "abort": true,
00:18:54.344 "nvme_admin": false,
00:18:54.344 "nvme_io": false
00:18:54.344 },
00:18:54.344 "memory_domains": [
00:18:54.344 {
00:18:54.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:54.344 "dma_device_type": 2
00:18:54.344 }
00:18:54.344 ],
00:18:54.344 "driver_specific": {}
00:18:54.344 }
00:18:54.344 ]
00:18:54.344 22:26:53 -- common/autotest_common.sh@895 -- # return 0
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:54.344 22:26:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:54.603 22:26:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:54.603 "name": "Existed_Raid",
00:18:54.603 "uuid": "ac59050d-9a32-40cd-8503-4ea9dcb96a7c",
00:18:54.603 "strip_size_kb": 64,
00:18:54.603 "state": "online",
00:18:54.603 "raid_level": "raid0",
00:18:54.603 "superblock": true,
00:18:54.603 "num_base_bdevs": 4,
00:18:54.603 "num_base_bdevs_discovered": 4,
00:18:54.603 "num_base_bdevs_operational": 4,
00:18:54.603 "base_bdevs_list": [
00:18:54.603 {
00:18:54.603 "name": "BaseBdev1",
00:18:54.603 "uuid": "c97d4ae6-5a21-4cb8-9b43-d8e699df4994",
00:18:54.603 "is_configured": true,
00:18:54.603 "data_offset": 2048,
00:18:54.603 "data_size": 63488
00:18:54.603 },
00:18:54.603 {
00:18:54.603 "name": "BaseBdev2",
00:18:54.603 "uuid": "1339a31f-b8ce-4707-b3f1-e4ef90157318",
00:18:54.603 "is_configured": true,
00:18:54.603 "data_offset": 2048,
00:18:54.603 "data_size": 63488
00:18:54.603 },
00:18:54.603 {
00:18:54.603 "name": "BaseBdev3",
00:18:54.603 "uuid": "e0d80e04-ff42-41fb-a1fa-75a10cc32665",
00:18:54.603 "is_configured": true,
00:18:54.603 "data_offset": 2048,
00:18:54.603 "data_size": 63488
00:18:54.603 },
00:18:54.603 {
00:18:54.603 "name": "BaseBdev4",
00:18:54.603 "uuid": "98e26057-375c-4d77-8819-eb714c46a4cf",
00:18:54.603 "is_configured": true,
00:18:54.603 "data_offset": 2048,
00:18:54.603 "data_size": 63488
00:18:54.603 }
00:18:54.603 ]
00:18:54.603 }'
00:18:54.603 22:26:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:54.603 22:26:53 -- common/autotest_common.sh@10 -- # set +x
00:18:55.168 22:26:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:55.426 [2024-07-14 22:26:54.412610] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:55.426 [2024-07-14 22:26:54.412638] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:55.426 [2024-07-14 22:26:54.412701] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@197 -- # return 1
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@125 -- # local tmp
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:55.426 22:26:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:55.992 22:26:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:55.992 "name": "Existed_Raid",
00:18:55.992 "uuid": "ac59050d-9a32-40cd-8503-4ea9dcb96a7c",
00:18:55.992 "strip_size_kb": 64,
00:18:55.992 "state": "offline",
00:18:55.992 "raid_level": "raid0",
00:18:55.992 "superblock": true,
00:18:55.992 "num_base_bdevs": 4,
00:18:55.992 "num_base_bdevs_discovered": 3,
00:18:55.992 "num_base_bdevs_operational": 3,
00:18:55.992 "base_bdevs_list": [
00:18:55.992 {
00:18:55.992 "name": null,
00:18:55.992 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:55.992 "is_configured": false,
00:18:55.992 "data_offset": 2048,
00:18:55.992 "data_size": 63488
00:18:55.992 },
00:18:55.992 {
00:18:55.992 "name": "BaseBdev2",
00:18:55.992 "uuid": "1339a31f-b8ce-4707-b3f1-e4ef90157318",
00:18:55.992 "is_configured": true,
00:18:55.992 "data_offset": 2048,
00:18:55.992 "data_size": 63488
00:18:55.992 },
00:18:55.992 {
00:18:55.992 "name": "BaseBdev3",
00:18:55.992 "uuid": "e0d80e04-ff42-41fb-a1fa-75a10cc32665",
00:18:55.992 "is_configured": true,
00:18:55.992 "data_offset": 2048,
00:18:55.992 "data_size": 63488
00:18:55.992 },
00:18:55.992 {
00:18:55.992 "name": "BaseBdev4",
00:18:55.992 "uuid": "98e26057-375c-4d77-8819-eb714c46a4cf",
00:18:55.992 "is_configured": true,
00:18:55.992 "data_offset": 2048,
00:18:55.992 "data_size": 63488
00:18:55.992 }
00:18:55.992 ]
00:18:55.992 }'
00:18:55.992 22:26:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:55.992 22:26:54 -- common/autotest_common.sh@10 -- # set +x
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:56.559 22:26:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:18:56.817 [2024-07-14 22:26:55.876304] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:57.074 22:26:55 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:57.074 22:26:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:57.074 22:26:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:57.074 22:26:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:57.332 22:26:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:57.332 22:26:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:57.332 22:26:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:18:57.332 [2024-07-14 22:26:56.375541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:57.591 22:26:56 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:57.591 22:26:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:57.591 22:26:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:57.591 22:26:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:57.849 22:26:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:57.849 22:26:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:57.849 22:26:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:18:58.107 [2024-07-14 22:26:56.913152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:18:58.107 [2024-07-14 22:26:56.913223] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline
00:18:58.107 22:26:56 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:58.107 22:26:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:58.107 22:26:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:58.107 22:26:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:18:58.364 22:26:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:18:58.365 22:26:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:18:58.365 22:26:57 -- bdev/bdev_raid.sh@287 -- # killprocess 121822
00:18:58.365 22:26:57 -- common/autotest_common.sh@926 -- # '[' -z 121822 ']'
00:18:58.365 22:26:57 -- common/autotest_common.sh@930 -- # kill -0 121822
00:18:58.365 22:26:57 -- common/autotest_common.sh@931 -- # uname
00:18:58.365 22:26:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:58.365 22:26:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121822
killing process with pid 121822
00:18:58.365 22:26:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:18:58.365 22:26:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:18:58.365 22:26:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121822'
22:26:57 -- common/autotest_common.sh@945 -- # kill 121822
00:18:58.365 22:26:57 -- common/autotest_common.sh@950 -- # wait 121822
00:18:58.365 [2024-07-14 22:26:57.265986] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:58.365 [2024-07-14 22:26:57.266123] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:59.318 ************************************
00:18:59.318 END TEST raid_state_function_test_sb
00:18:59.318 ************************************
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@289 -- # return 0
00:18:59.318
00:18:59.318 real 0m14.699s
00:18:59.318 user 0m26.261s
00:18:59.318 sys 0m1.826s
00:18:59.318 22:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:59.318 22:26:58 -- common/autotest_common.sh@10 -- # set +x
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:18:59.318 22:26:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:18:59.318 22:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:18:59.318 22:26:58 -- common/autotest_common.sh@10 -- # set +x
00:18:59.318 ************************************
00:18:59.318 START TEST raid_superblock_test
00:18:59.318 ************************************
00:18:59.318 22:26:58 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@357 -- # raid_pid=122317
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122317 /var/tmp/spdk-raid.sock
00:18:59.318 22:26:58 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:18:59.318 22:26:58 -- common/autotest_common.sh@819 -- # '[' -z 122317 ']'
00:18:59.318 22:26:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:59.318 22:26:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:59.318 22:26:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:59.318 22:26:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:59.318 22:26:58 -- common/autotest_common.sh@10 -- # set +x
00:18:59.318 [2024-07-14 22:26:58.316075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:18:59.318 [2024-07-14 22:26:58.316305] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122317 ]
00:18:59.590 [2024-07-14 22:26:58.479132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:59.590 [2024-07-14 22:26:58.639086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:59.848 [2024-07-14 22:26:58.809350] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:00.413 22:26:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:00.413 22:26:59 -- common/autotest_common.sh@852 -- # return 0
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:00.413 22:26:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:19:00.672 malloc1
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:00.672 [2024-07-14 22:26:59.710991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:00.672 [2024-07-14 22:26:59.711088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:00.672 [2024-07-14 22:26:59.711119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:19:00.672 [2024-07-14 22:26:59.711161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:00.672 [2024-07-14 22:26:59.713155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:00.672 [2024-07-14 22:26:59.713216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:00.672 pt1
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:00.672 22:26:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:19:00.930 malloc2
00:19:00.931 22:26:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:01.189 [2024-07-14 22:27:00.182187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:01.189 [2024-07-14 22:27:00.182255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.189 [2024-07-14 22:27:00.182293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:19:01.189 [2024-07-14 22:27:00.182340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.189 [2024-07-14 22:27:00.184551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.189 [2024-07-14 22:27:00.184599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:01.189 pt2
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:01.189 22:27:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:19:01.447 malloc3
00:19:01.447 22:27:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:01.706 [2024-07-14 22:27:00.602790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:01.706 [2024-07-14 22:27:00.602884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.706 [2024-07-14 22:27:00.602923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:19:01.706 [2024-07-14 22:27:00.602963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.706 [2024-07-14 22:27:00.605035] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.706 [2024-07-14 22:27:00.605101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:01.706 pt3
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:01.706 22:27:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:19:01.965 malloc4
00:19:01.965 22:27:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:01.965 [2024-07-14 22:27:01.015140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:01.965 [2024-07-14 22:27:01.015218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.965 [2024-07-14 22:27:01.015283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:19:01.965 [2024-07-14 22:27:01.015321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.965 [2024-07-14 22:27:01.017286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.965 [2024-07-14 22:27:01.017349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:01.965 pt4
00:19:01.965 22:27:01 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:01.965 22:27:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:01.965 22:27:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:19:02.222 [2024-07-14 22:27:01.219293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:02.222 [2024-07-14 22:27:01.220978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:02.222 [2024-07-14 22:27:01.221050] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:02.222 [2024-07-14 22:27:01.221124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:02.222 [2024-07-14 22:27:01.221371] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:19:02.222 [2024-07-14 22:27:01.221395] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:19:02.222 [2024-07-14 22:27:01.221510] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:19:02.222 [2024-07-14 22:27:01.221896] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:19:02.222 [2024-07-14 22:27:01.221935] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:19:02.222 [2024-07-14 22:27:01.222083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:02.222 22:27:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.479 22:27:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.479 "name": "raid_bdev1", 00:19:02.479 "uuid": "18f9e4ef-0433-4f91-83f7-ff5696edf4ce", 00:19:02.479 "strip_size_kb": 64, 00:19:02.479 "state": "online", 00:19:02.479 "raid_level": "raid0", 00:19:02.479 "superblock": true, 00:19:02.479 "num_base_bdevs": 4, 00:19:02.479 "num_base_bdevs_discovered": 4, 00:19:02.479 "num_base_bdevs_operational": 4, 00:19:02.479 "base_bdevs_list": [ 00:19:02.479 { 00:19:02.479 "name": "pt1", 00:19:02.479 "uuid": "eeb19a15-e82c-52f1-b75f-5dc57571443e", 00:19:02.479 "is_configured": true, 00:19:02.479 "data_offset": 2048, 00:19:02.479 "data_size": 63488 00:19:02.479 }, 00:19:02.479 { 00:19:02.479 "name": "pt2", 00:19:02.479 "uuid": "d42bd225-8d14-5cbe-99fb-0a807327d1bb", 00:19:02.479 "is_configured": true, 00:19:02.479 "data_offset": 2048, 00:19:02.479 "data_size": 63488 00:19:02.479 }, 00:19:02.479 { 00:19:02.479 "name": "pt3", 00:19:02.479 "uuid": "2ab897d7-77fd-5fbd-9c67-efc366cbd8c8", 00:19:02.479 "is_configured": true, 00:19:02.479 "data_offset": 2048, 00:19:02.479 "data_size": 63488 00:19:02.479 }, 00:19:02.479 { 00:19:02.479 "name": "pt4", 00:19:02.479 "uuid": "3363b3f0-d8df-55c4-852b-f14f8fb854af", 00:19:02.479 "is_configured": true, 00:19:02.479 "data_offset": 2048, 00:19:02.479 "data_size": 63488 00:19:02.479 } 00:19:02.479 ] 00:19:02.479 }' 00:19:02.479 22:27:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.479 22:27:01 -- common/autotest_common.sh@10 -- # set +x 00:19:03.044 22:27:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.044 22:27:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:03.301 [2024-07-14 22:27:02.291610] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.301 22:27:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=18f9e4ef-0433-4f91-83f7-ff5696edf4ce 00:19:03.301 22:27:02 -- bdev/bdev_raid.sh@380 -- # '[' -z 18f9e4ef-0433-4f91-83f7-ff5696edf4ce ']' 00:19:03.301 22:27:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:03.560 [2024-07-14 22:27:02.479420] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.560 [2024-07-14 22:27:02.479444] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.560 [2024-07-14 22:27:02.479508] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.560 [2024-07-14 22:27:02.479564] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.560 [2024-07-14 22:27:02.479573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:03.560 22:27:02 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.560 22:27:02 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:03.817 22:27:02 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:03.817 22:27:02 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:03.818 22:27:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.818 22:27:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
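The verify_raid_bdev_state helper traced above works purely over JSON-RPC: it dumps every raid bdev, selects the one under test with jq, and compares fields such as state, raid_level, and num_base_bdevs_discovered against the expected values. The same check can be reproduced by hand; a sketch, with the socket path and bdev name taken from this run:

scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | .state'      # "online" at this point in the run
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
  | jq -r '.[] | .uuid'                                       # the UUID fed into the '[ -z ... ]' guard above

Once the checks pass, the array and its wrappers are torn down: bdev_raid_delete for raid_bdev1, then one bdev_passthru_delete per pt bdev, which is what the delete loop around this point is doing.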
00:19:04.076 22:27:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.076 22:27:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:04.076 22:27:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.076 22:27:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:04.335 22:27:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.335 22:27:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:04.594 22:27:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:04.594 22:27:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:04.852 22:27:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:04.852 22:27:03 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:04.852 22:27:03 -- common/autotest_common.sh@640 -- # local es=0 00:19:04.852 22:27:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:04.852 22:27:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.852 22:27:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.852 22:27:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.852 22:27:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.852 22:27:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.852 22:27:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.852 22:27:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.852 22:27:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:04.852 22:27:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:04.852 [2024-07-14 22:27:03.871649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:04.852 [2024-07-14 22:27:03.873525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:04.852 [2024-07-14 22:27:03.873577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:04.852 [2024-07-14 22:27:03.873622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:04.852 [2024-07-14 22:27:03.873716] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:04.852 [2024-07-14 22:27:03.873786] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:04.852 [2024-07-14 22:27:03.873822] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:04.852 [2024-07-14 
22:27:03.873906] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:04.852 [2024-07-14 22:27:03.873950] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.852 [2024-07-14 22:27:03.873960] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:04.852 request: 00:19:04.852 { 00:19:04.852 "name": "raid_bdev1", 00:19:04.852 "raid_level": "raid0", 00:19:04.852 "base_bdevs": [ 00:19:04.852 "malloc1", 00:19:04.852 "malloc2", 00:19:04.852 "malloc3", 00:19:04.852 "malloc4" 00:19:04.852 ], 00:19:04.852 "superblock": false, 00:19:04.852 "strip_size_kb": 64, 00:19:04.852 "method": "bdev_raid_create", 00:19:04.852 "req_id": 1 00:19:04.852 } 00:19:04.852 Got JSON-RPC error response 00:19:04.852 response: 00:19:04.852 { 00:19:04.852 "code": -17, 00:19:04.852 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:04.852 } 00:19:04.852 22:27:03 -- common/autotest_common.sh@643 -- # es=1 00:19:04.852 22:27:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:04.852 22:27:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:04.852 22:27:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:04.852 22:27:03 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.852 22:27:03 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:05.110 22:27:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:05.110 22:27:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:05.110 22:27:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.368 [2024-07-14 22:27:04.275690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.368 [2024-07-14 22:27:04.275749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.368 [2024-07-14 22:27:04.275806] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:05.368 [2024-07-14 22:27:04.275831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.368 [2024-07-14 22:27:04.277914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.368 [2024-07-14 22:27:04.277977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.368 [2024-07-14 22:27:04.278091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:05.368 [2024-07-14 22:27:04.278157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.368 pt1 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.368 22:27:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.627 22:27:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.627 "name": "raid_bdev1", 00:19:05.627 "uuid": "18f9e4ef-0433-4f91-83f7-ff5696edf4ce", 00:19:05.627 "strip_size_kb": 64, 00:19:05.627 "state": "configuring", 00:19:05.627 "raid_level": "raid0", 00:19:05.627 "superblock": true, 00:19:05.627 "num_base_bdevs": 4, 00:19:05.627 "num_base_bdevs_discovered": 1, 00:19:05.627 "num_base_bdevs_operational": 4, 00:19:05.627 "base_bdevs_list": [ 00:19:05.627 { 00:19:05.627 "name": "pt1", 00:19:05.627 "uuid": "eeb19a15-e82c-52f1-b75f-5dc57571443e", 00:19:05.627 "is_configured": true, 00:19:05.627 "data_offset": 2048, 00:19:05.627 "data_size": 63488 00:19:05.627 }, 00:19:05.627 { 00:19:05.627 "name": null, 00:19:05.627 "uuid": "d42bd225-8d14-5cbe-99fb-0a807327d1bb", 00:19:05.627 "is_configured": false, 00:19:05.627 "data_offset": 2048, 00:19:05.627 "data_size": 63488 00:19:05.627 }, 00:19:05.627 { 00:19:05.627 "name": null, 00:19:05.627 "uuid": "2ab897d7-77fd-5fbd-9c67-efc366cbd8c8", 00:19:05.627 "is_configured": false, 00:19:05.627 "data_offset": 2048, 00:19:05.627 "data_size": 63488 00:19:05.627 }, 00:19:05.627 { 00:19:05.627 "name": null, 00:19:05.627 "uuid": "3363b3f0-d8df-55c4-852b-f14f8fb854af", 00:19:05.627 "is_configured": false, 00:19:05.627 "data_offset": 2048, 00:19:05.627 "data_size": 63488 00:19:05.627 } 00:19:05.627 ] 00:19:05.627 }' 00:19:05.627 22:27:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.627 22:27:04 -- common/autotest_common.sh@10 -- # set +x 00:19:06.194 22:27:05 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:06.194 22:27:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.452 [2024-07-14 22:27:05.288123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.452 [2024-07-14 22:27:05.288202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.452 [2024-07-14 22:27:05.288238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:06.452 [2024-07-14 22:27:05.288256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.452 [2024-07-14 22:27:05.288762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.452 [2024-07-14 22:27:05.288832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.452 [2024-07-14 22:27:05.288982] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:06.452 [2024-07-14 22:27:05.289018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.452 pt2 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:06.452 [2024-07-14 22:27:05.476144] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
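The request/response pair logged just before this point is the negative test: each mallocN still carries the raid superblock written through its pt wrapper, so calling bdev_raid_create directly on the raw malloc bdevs is rejected with JSON-RPC error -17 (File exists), and the NOT wrapper asserts the non-zero exit. The failing call, for reference (identical to the one traced above):

# expected to fail with -17: the base bdevs already hold a raid superblock
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
  -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1

From here the test re-registers the pt bdevs one at a time; each registration lets the examine path find the superblock and re-claim the device, which is why num_base_bdevs_discovered climbs back toward 4 and the state moves from configuring to online in the dumps that follow.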
00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.452 22:27:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.710 22:27:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.710 "name": "raid_bdev1", 00:19:06.710 "uuid": "18f9e4ef-0433-4f91-83f7-ff5696edf4ce", 00:19:06.710 "strip_size_kb": 64, 00:19:06.710 "state": "configuring", 00:19:06.710 "raid_level": "raid0", 00:19:06.710 "superblock": true, 00:19:06.710 "num_base_bdevs": 4, 00:19:06.710 "num_base_bdevs_discovered": 1, 00:19:06.710 "num_base_bdevs_operational": 4, 00:19:06.710 "base_bdevs_list": [ 00:19:06.710 { 00:19:06.710 "name": "pt1", 00:19:06.710 "uuid": "eeb19a15-e82c-52f1-b75f-5dc57571443e", 00:19:06.710 "is_configured": true, 00:19:06.710 "data_offset": 2048, 00:19:06.710 "data_size": 63488 00:19:06.710 }, 00:19:06.710 { 00:19:06.710 "name": null, 00:19:06.710 "uuid": "d42bd225-8d14-5cbe-99fb-0a807327d1bb", 00:19:06.710 "is_configured": false, 00:19:06.710 "data_offset": 2048, 00:19:06.710 "data_size": 63488 00:19:06.710 }, 00:19:06.710 { 00:19:06.710 "name": null, 00:19:06.710 "uuid": "2ab897d7-77fd-5fbd-9c67-efc366cbd8c8", 00:19:06.710 "is_configured": false, 00:19:06.710 "data_offset": 2048, 00:19:06.710 "data_size": 63488 00:19:06.710 }, 00:19:06.710 { 00:19:06.710 "name": null, 00:19:06.710 "uuid": "3363b3f0-d8df-55c4-852b-f14f8fb854af", 00:19:06.710 "is_configured": false, 00:19:06.710 "data_offset": 2048, 00:19:06.710 "data_size": 63488 00:19:06.710 } 00:19:06.710 ] 00:19:06.710 }' 00:19:06.710 22:27:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.710 22:27:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.278 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:07.278 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:07.278 22:27:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.536 [2024-07-14 22:27:06.497970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.536 [2024-07-14 22:27:06.498374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.536 [2024-07-14 22:27:06.498526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:07.536 [2024-07-14 22:27:06.498660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.536 [2024-07-14 22:27:06.499205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.536 [2024-07-14 22:27:06.499401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.536 [2024-07-14 22:27:06.499670] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:07.536 [2024-07-14 22:27:06.499717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.536 pt2 00:19:07.536 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:07.536 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:07.536 22:27:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:07.795 [2024-07-14 22:27:06.706013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:07.795 [2024-07-14 22:27:06.706161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.795 [2024-07-14 22:27:06.706315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:07.795 [2024-07-14 22:27:06.706430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.795 [2024-07-14 22:27:06.706919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.795 [2024-07-14 22:27:06.707095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:07.795 [2024-07-14 22:27:06.707317] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:07.795 [2024-07-14 22:27:06.707345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:07.795 pt3 00:19:07.795 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:07.795 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:07.795 22:27:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:08.053 [2024-07-14 22:27:06.890048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:08.053 [2024-07-14 22:27:06.890221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.053 [2024-07-14 22:27:06.890387] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:08.053 [2024-07-14 22:27:06.890520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.053 [2024-07-14 22:27:06.890991] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.053 [2024-07-14 22:27:06.891153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:08.053 [2024-07-14 22:27:06.891346] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:08.053 [2024-07-14 22:27:06.891429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:08.053 [2024-07-14 22:27:06.891555] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:08.053 [2024-07-14 22:27:06.891567] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:08.053 [2024-07-14 22:27:06.891667] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:08.053 [2024-07-14 22:27:06.891998] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:08.053 [2024-07-14 22:27:06.892022] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:08.053 [2024-07-14 22:27:06.892147] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:08.053 pt4 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.053 22:27:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.312 22:27:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.312 "name": "raid_bdev1", 00:19:08.312 "uuid": "18f9e4ef-0433-4f91-83f7-ff5696edf4ce", 00:19:08.312 "strip_size_kb": 64, 00:19:08.312 "state": "online", 00:19:08.312 "raid_level": "raid0", 00:19:08.312 "superblock": true, 00:19:08.312 "num_base_bdevs": 4, 00:19:08.312 "num_base_bdevs_discovered": 4, 00:19:08.312 "num_base_bdevs_operational": 4, 00:19:08.312 "base_bdevs_list": [ 00:19:08.312 { 00:19:08.312 "name": "pt1", 00:19:08.312 "uuid": "eeb19a15-e82c-52f1-b75f-5dc57571443e", 00:19:08.312 "is_configured": true, 00:19:08.312 "data_offset": 2048, 00:19:08.312 "data_size": 63488 00:19:08.312 }, 00:19:08.312 { 00:19:08.312 "name": "pt2", 00:19:08.312 "uuid": "d42bd225-8d14-5cbe-99fb-0a807327d1bb", 00:19:08.312 "is_configured": true, 00:19:08.312 "data_offset": 2048, 00:19:08.312 "data_size": 63488 00:19:08.312 }, 00:19:08.312 { 00:19:08.312 "name": "pt3", 00:19:08.312 "uuid": "2ab897d7-77fd-5fbd-9c67-efc366cbd8c8", 00:19:08.312 "is_configured": true, 00:19:08.312 "data_offset": 2048, 00:19:08.312 "data_size": 63488 00:19:08.312 }, 00:19:08.312 { 00:19:08.312 "name": "pt4", 00:19:08.312 "uuid": "3363b3f0-d8df-55c4-852b-f14f8fb854af", 00:19:08.312 "is_configured": true, 00:19:08.312 "data_offset": 2048, 00:19:08.312 "data_size": 63488 00:19:08.312 } 00:19:08.312 ] 00:19:08.312 }' 00:19:08.312 22:27:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.312 22:27:07 -- common/autotest_common.sh@10 -- # set +x 00:19:08.878 22:27:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:08.878 22:27:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:09.137 [2024-07-14 22:27:07.954700] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.137 22:27:07 -- bdev/bdev_raid.sh@430 -- # '[' 18f9e4ef-0433-4f91-83f7-ff5696edf4ce '!=' 18f9e4ef-0433-4f91-83f7-ff5696edf4ce ']' 00:19:09.137 22:27:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:09.137 22:27:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:09.137 22:27:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:09.137 22:27:07 -- bdev/bdev_raid.sh@511 -- # killprocess 122317 00:19:09.137 22:27:07 -- common/autotest_common.sh@926 -- # '[' -z 
122317 ']' 00:19:09.137 22:27:07 -- common/autotest_common.sh@930 -- # kill -0 122317 00:19:09.137 22:27:07 -- common/autotest_common.sh@931 -- # uname 00:19:09.137 22:27:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:09.137 22:27:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122317 00:19:09.137 killing process with pid 122317 00:19:09.137 22:27:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:09.137 22:27:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:09.137 22:27:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122317' 00:19:09.137 22:27:07 -- common/autotest_common.sh@945 -- # kill 122317 00:19:09.137 22:27:07 -- common/autotest_common.sh@950 -- # wait 122317 00:19:09.137 [2024-07-14 22:27:07.985684] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.137 [2024-07-14 22:27:07.985791] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.137 [2024-07-14 22:27:07.985889] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.137 [2024-07-14 22:27:07.985919] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:09.395 [2024-07-14 22:27:08.283811] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.329 ************************************ 00:19:10.329 END TEST raid_superblock_test 00:19:10.329 ************************************ 00:19:10.329 22:27:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:10.329 00:19:10.329 real 0m11.111s 00:19:10.329 user 0m19.232s 00:19:10.329 sys 0m1.298s 00:19:10.329 22:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.329 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:10.588 22:27:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:10.588 22:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:10.588 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.588 ************************************ 00:19:10.588 START TEST raid_state_function_test 00:19:10.588 ************************************ 00:19:10.588 22:27:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.588 22:27:09 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=122654 00:19:10.588 Process raid pid: 122654 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122654' 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122654 /var/tmp/spdk-raid.sock 00:19:10.588 22:27:09 -- common/autotest_common.sh@819 -- # '[' -z 122654 ']' 00:19:10.588 22:27:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:10.588 22:27:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:10.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:10.588 22:27:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:10.588 22:27:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:10.588 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.588 22:27:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:10.588 [2024-07-14 22:27:09.457686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
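raid_state_function_test drives a freshly started bdev_svc app rather than reusing the previous one: it launches the target with raid-level debug logging on a private RPC socket, waits for the socket to accept connections, and only then starts issuing RPCs. The launch seen above, in sketch form (binary path, socket, and flags as printed in the log; the pid capture is the usual shell idiom):

# start the bare bdev service with bdev_raid debug tracing on a dedicated socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# waitforlisten (from autotest_common.sh) polls until the socket answers before any rpc.py call
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

With superblock=false there is no examine-based reassembly in this test; the bdev_raid_create call that follows names BaseBdev1..BaseBdev4 before they exist, leaving the raid in the configuring state until each base bdev is created and claimed.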
00:19:10.588 [2024-07-14 22:27:09.458022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.588 [2024-07-14 22:27:09.611059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.846 [2024-07-14 22:27:09.771514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.105 [2024-07-14 22:27:09.936781] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.364 22:27:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:11.364 22:27:10 -- common/autotest_common.sh@852 -- # return 0 00:19:11.364 22:27:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.622 [2024-07-14 22:27:10.629145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.622 [2024-07-14 22:27:10.629229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.622 [2024-07-14 22:27:10.629242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.622 [2024-07-14 22:27:10.629262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.622 [2024-07-14 22:27:10.629269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.622 [2024-07-14 22:27:10.629302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.622 [2024-07-14 22:27:10.629326] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.622 [2024-07-14 22:27:10.629378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.622 22:27:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.623 22:27:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.623 22:27:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.623 22:27:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.881 22:27:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.881 "name": "Existed_Raid", 00:19:11.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.881 "strip_size_kb": 64, 00:19:11.881 "state": "configuring", 00:19:11.881 "raid_level": "concat", 00:19:11.881 "superblock": false, 00:19:11.881 "num_base_bdevs": 4, 00:19:11.881 "num_base_bdevs_discovered": 0, 00:19:11.881 "num_base_bdevs_operational": 4, 00:19:11.881 "base_bdevs_list": [ 00:19:11.881 { 00:19:11.881 
"name": "BaseBdev1", 00:19:11.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.881 "is_configured": false, 00:19:11.881 "data_offset": 0, 00:19:11.881 "data_size": 0 00:19:11.881 }, 00:19:11.881 { 00:19:11.881 "name": "BaseBdev2", 00:19:11.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.881 "is_configured": false, 00:19:11.881 "data_offset": 0, 00:19:11.881 "data_size": 0 00:19:11.881 }, 00:19:11.881 { 00:19:11.881 "name": "BaseBdev3", 00:19:11.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.881 "is_configured": false, 00:19:11.881 "data_offset": 0, 00:19:11.881 "data_size": 0 00:19:11.881 }, 00:19:11.881 { 00:19:11.881 "name": "BaseBdev4", 00:19:11.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.881 "is_configured": false, 00:19:11.881 "data_offset": 0, 00:19:11.881 "data_size": 0 00:19:11.881 } 00:19:11.881 ] 00:19:11.881 }' 00:19:11.881 22:27:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.881 22:27:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.531 22:27:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.790 [2024-07-14 22:27:11.725907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.790 [2024-07-14 22:27:11.725943] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:12.790 22:27:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:13.048 [2024-07-14 22:27:11.905965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.048 [2024-07-14 22:27:11.906015] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.048 [2024-07-14 22:27:11.906041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.048 [2024-07-14 22:27:11.906068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.048 [2024-07-14 22:27:11.906076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:13.048 [2024-07-14 22:27:11.906106] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:13.048 [2024-07-14 22:27:11.906129] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:13.048 [2024-07-14 22:27:11.906180] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:13.048 22:27:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:13.306 [2024-07-14 22:27:12.115229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.306 BaseBdev1 00:19:13.306 22:27:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:13.306 22:27:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:13.306 22:27:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:13.306 22:27:12 -- common/autotest_common.sh@889 -- # local i 00:19:13.306 22:27:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:13.306 22:27:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:13.306 22:27:12 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.564 22:27:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.564 [ 00:19:13.564 { 00:19:13.564 "name": "BaseBdev1", 00:19:13.564 "aliases": [ 00:19:13.564 "b1ac1435-2c89-410c-bd1b-1fa68595d2e7" 00:19:13.564 ], 00:19:13.564 "product_name": "Malloc disk", 00:19:13.564 "block_size": 512, 00:19:13.564 "num_blocks": 65536, 00:19:13.564 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:13.564 "assigned_rate_limits": { 00:19:13.564 "rw_ios_per_sec": 0, 00:19:13.564 "rw_mbytes_per_sec": 0, 00:19:13.564 "r_mbytes_per_sec": 0, 00:19:13.564 "w_mbytes_per_sec": 0 00:19:13.564 }, 00:19:13.564 "claimed": true, 00:19:13.564 "claim_type": "exclusive_write", 00:19:13.564 "zoned": false, 00:19:13.564 "supported_io_types": { 00:19:13.564 "read": true, 00:19:13.564 "write": true, 00:19:13.564 "unmap": true, 00:19:13.564 "write_zeroes": true, 00:19:13.564 "flush": true, 00:19:13.564 "reset": true, 00:19:13.564 "compare": false, 00:19:13.564 "compare_and_write": false, 00:19:13.564 "abort": true, 00:19:13.564 "nvme_admin": false, 00:19:13.564 "nvme_io": false 00:19:13.564 }, 00:19:13.564 "memory_domains": [ 00:19:13.564 { 00:19:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.564 "dma_device_type": 2 00:19:13.564 } 00:19:13.564 ], 00:19:13.564 "driver_specific": {} 00:19:13.564 } 00:19:13.564 ] 00:19:13.564 22:27:12 -- common/autotest_common.sh@895 -- # return 0 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.564 22:27:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.822 22:27:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.822 "name": "Existed_Raid", 00:19:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.822 "strip_size_kb": 64, 00:19:13.822 "state": "configuring", 00:19:13.822 "raid_level": "concat", 00:19:13.822 "superblock": false, 00:19:13.822 "num_base_bdevs": 4, 00:19:13.822 "num_base_bdevs_discovered": 1, 00:19:13.822 "num_base_bdevs_operational": 4, 00:19:13.822 "base_bdevs_list": [ 00:19:13.822 { 00:19:13.822 "name": "BaseBdev1", 00:19:13.822 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:13.822 "is_configured": true, 00:19:13.822 "data_offset": 0, 00:19:13.822 "data_size": 65536 00:19:13.822 }, 00:19:13.822 { 00:19:13.822 "name": "BaseBdev2", 00:19:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.822 "is_configured": false, 00:19:13.822 "data_offset": 0, 00:19:13.822 "data_size": 0 00:19:13.822 }, 
00:19:13.822 { 00:19:13.822 "name": "BaseBdev3", 00:19:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.822 "is_configured": false, 00:19:13.822 "data_offset": 0, 00:19:13.822 "data_size": 0 00:19:13.822 }, 00:19:13.822 { 00:19:13.822 "name": "BaseBdev4", 00:19:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.822 "is_configured": false, 00:19:13.822 "data_offset": 0, 00:19:13.822 "data_size": 0 00:19:13.822 } 00:19:13.822 ] 00:19:13.822 }' 00:19:13.822 22:27:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.822 22:27:12 -- common/autotest_common.sh@10 -- # set +x 00:19:14.390 22:27:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.647 [2024-07-14 22:27:13.543496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.647 [2024-07-14 22:27:13.543558] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:14.647 22:27:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:14.647 22:27:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.905 [2024-07-14 22:27:13.787587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.905 [2024-07-14 22:27:13.789230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.905 [2024-07-14 22:27:13.789304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.905 [2024-07-14 22:27:13.789331] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.905 [2024-07-14 22:27:13.789353] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.905 [2024-07-14 22:27:13.789361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.905 [2024-07-14 22:27:13.789375] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.905 22:27:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.162 22:27:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.162 "name": "Existed_Raid", 00:19:15.162 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.162 "strip_size_kb": 64, 00:19:15.162 "state": "configuring", 00:19:15.162 "raid_level": "concat", 00:19:15.162 "superblock": false, 00:19:15.163 "num_base_bdevs": 4, 00:19:15.163 "num_base_bdevs_discovered": 1, 00:19:15.163 "num_base_bdevs_operational": 4, 00:19:15.163 "base_bdevs_list": [ 00:19:15.163 { 00:19:15.163 "name": "BaseBdev1", 00:19:15.163 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:15.163 "is_configured": true, 00:19:15.163 "data_offset": 0, 00:19:15.163 "data_size": 65536 00:19:15.163 }, 00:19:15.163 { 00:19:15.163 "name": "BaseBdev2", 00:19:15.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.163 "is_configured": false, 00:19:15.163 "data_offset": 0, 00:19:15.163 "data_size": 0 00:19:15.163 }, 00:19:15.163 { 00:19:15.163 "name": "BaseBdev3", 00:19:15.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.163 "is_configured": false, 00:19:15.163 "data_offset": 0, 00:19:15.163 "data_size": 0 00:19:15.163 }, 00:19:15.163 { 00:19:15.163 "name": "BaseBdev4", 00:19:15.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.163 "is_configured": false, 00:19:15.163 "data_offset": 0, 00:19:15.163 "data_size": 0 00:19:15.163 } 00:19:15.163 ] 00:19:15.163 }' 00:19:15.163 22:27:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.163 22:27:13 -- common/autotest_common.sh@10 -- # set +x 00:19:15.728 22:27:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:15.986 [2024-07-14 22:27:14.830660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.986 BaseBdev2 00:19:15.986 22:27:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:15.986 22:27:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:15.986 22:27:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:15.986 22:27:14 -- common/autotest_common.sh@889 -- # local i 00:19:15.986 22:27:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:15.986 22:27:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:15.986 22:27:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.243 22:27:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:16.243 [ 00:19:16.243 { 00:19:16.243 "name": "BaseBdev2", 00:19:16.243 "aliases": [ 00:19:16.243 "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0" 00:19:16.243 ], 00:19:16.243 "product_name": "Malloc disk", 00:19:16.243 "block_size": 512, 00:19:16.243 "num_blocks": 65536, 00:19:16.243 "uuid": "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0", 00:19:16.243 "assigned_rate_limits": { 00:19:16.243 "rw_ios_per_sec": 0, 00:19:16.243 "rw_mbytes_per_sec": 0, 00:19:16.243 "r_mbytes_per_sec": 0, 00:19:16.243 "w_mbytes_per_sec": 0 00:19:16.243 }, 00:19:16.243 "claimed": true, 00:19:16.243 "claim_type": "exclusive_write", 00:19:16.243 "zoned": false, 00:19:16.243 "supported_io_types": { 00:19:16.243 "read": true, 00:19:16.243 "write": true, 00:19:16.243 "unmap": true, 00:19:16.243 "write_zeroes": true, 00:19:16.243 "flush": true, 00:19:16.243 "reset": true, 00:19:16.243 "compare": false, 00:19:16.243 "compare_and_write": false, 00:19:16.243 "abort": true, 00:19:16.243 "nvme_admin": false, 00:19:16.243 "nvme_io": false 00:19:16.243 }, 00:19:16.243 "memory_domains": [ 
00:19:16.243 { 00:19:16.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.243 "dma_device_type": 2 00:19:16.243 } 00:19:16.243 ], 00:19:16.243 "driver_specific": {} 00:19:16.243 } 00:19:16.243 ] 00:19:16.501 22:27:15 -- common/autotest_common.sh@895 -- # return 0 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.501 "name": "Existed_Raid", 00:19:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.501 "strip_size_kb": 64, 00:19:16.501 "state": "configuring", 00:19:16.501 "raid_level": "concat", 00:19:16.501 "superblock": false, 00:19:16.501 "num_base_bdevs": 4, 00:19:16.501 "num_base_bdevs_discovered": 2, 00:19:16.501 "num_base_bdevs_operational": 4, 00:19:16.501 "base_bdevs_list": [ 00:19:16.501 { 00:19:16.501 "name": "BaseBdev1", 00:19:16.501 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:16.501 "is_configured": true, 00:19:16.501 "data_offset": 0, 00:19:16.501 "data_size": 65536 00:19:16.501 }, 00:19:16.501 { 00:19:16.501 "name": "BaseBdev2", 00:19:16.501 "uuid": "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0", 00:19:16.501 "is_configured": true, 00:19:16.501 "data_offset": 0, 00:19:16.501 "data_size": 65536 00:19:16.501 }, 00:19:16.501 { 00:19:16.501 "name": "BaseBdev3", 00:19:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.501 "is_configured": false, 00:19:16.501 "data_offset": 0, 00:19:16.501 "data_size": 0 00:19:16.501 }, 00:19:16.501 { 00:19:16.501 "name": "BaseBdev4", 00:19:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.501 "is_configured": false, 00:19:16.501 "data_offset": 0, 00:19:16.501 "data_size": 0 00:19:16.501 } 00:19:16.501 ] 00:19:16.501 }' 00:19:16.501 22:27:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.501 22:27:15 -- common/autotest_common.sh@10 -- # set +x 00:19:17.067 22:27:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:17.324 [2024-07-14 22:27:16.278709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.324 BaseBdev3 00:19:17.324 22:27:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:17.324 22:27:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:17.324 22:27:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.324 
22:27:16 -- common/autotest_common.sh@889 -- # local i 00:19:17.324 22:27:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.324 22:27:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.324 22:27:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.582 22:27:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:17.582 [ 00:19:17.582 { 00:19:17.582 "name": "BaseBdev3", 00:19:17.582 "aliases": [ 00:19:17.582 "a05848ab-a974-4712-9325-3522e0279b88" 00:19:17.582 ], 00:19:17.582 "product_name": "Malloc disk", 00:19:17.582 "block_size": 512, 00:19:17.582 "num_blocks": 65536, 00:19:17.582 "uuid": "a05848ab-a974-4712-9325-3522e0279b88", 00:19:17.582 "assigned_rate_limits": { 00:19:17.582 "rw_ios_per_sec": 0, 00:19:17.582 "rw_mbytes_per_sec": 0, 00:19:17.582 "r_mbytes_per_sec": 0, 00:19:17.582 "w_mbytes_per_sec": 0 00:19:17.582 }, 00:19:17.582 "claimed": true, 00:19:17.582 "claim_type": "exclusive_write", 00:19:17.582 "zoned": false, 00:19:17.582 "supported_io_types": { 00:19:17.582 "read": true, 00:19:17.582 "write": true, 00:19:17.582 "unmap": true, 00:19:17.582 "write_zeroes": true, 00:19:17.582 "flush": true, 00:19:17.582 "reset": true, 00:19:17.582 "compare": false, 00:19:17.582 "compare_and_write": false, 00:19:17.582 "abort": true, 00:19:17.582 "nvme_admin": false, 00:19:17.582 "nvme_io": false 00:19:17.582 }, 00:19:17.582 "memory_domains": [ 00:19:17.582 { 00:19:17.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.582 "dma_device_type": 2 00:19:17.582 } 00:19:17.582 ], 00:19:17.582 "driver_specific": {} 00:19:17.582 } 00:19:17.582 ] 00:19:17.840 22:27:16 -- common/autotest_common.sh@895 -- # return 0 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.840 "name": "Existed_Raid", 00:19:17.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.840 "strip_size_kb": 64, 00:19:17.840 "state": "configuring", 00:19:17.840 "raid_level": "concat", 00:19:17.840 "superblock": false, 00:19:17.840 "num_base_bdevs": 4, 00:19:17.840 "num_base_bdevs_discovered": 3, 00:19:17.840 "num_base_bdevs_operational": 4, 00:19:17.840 "base_bdevs_list": [ 00:19:17.840 { 00:19:17.840 "name": 
"BaseBdev1", 00:19:17.840 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:17.840 "is_configured": true, 00:19:17.840 "data_offset": 0, 00:19:17.840 "data_size": 65536 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "name": "BaseBdev2", 00:19:17.840 "uuid": "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0", 00:19:17.840 "is_configured": true, 00:19:17.840 "data_offset": 0, 00:19:17.840 "data_size": 65536 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "name": "BaseBdev3", 00:19:17.840 "uuid": "a05848ab-a974-4712-9325-3522e0279b88", 00:19:17.840 "is_configured": true, 00:19:17.840 "data_offset": 0, 00:19:17.840 "data_size": 65536 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "name": "BaseBdev4", 00:19:17.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.840 "is_configured": false, 00:19:17.840 "data_offset": 0, 00:19:17.840 "data_size": 0 00:19:17.840 } 00:19:17.840 ] 00:19:17.840 }' 00:19:17.840 22:27:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.840 22:27:16 -- common/autotest_common.sh@10 -- # set +x 00:19:18.773 22:27:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:18.773 [2024-07-14 22:27:17.704045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:18.773 [2024-07-14 22:27:17.704150] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:18.773 [2024-07-14 22:27:17.704160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:18.773 [2024-07-14 22:27:17.704288] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:18.773 [2024-07-14 22:27:17.704662] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:18.773 [2024-07-14 22:27:17.704685] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:18.773 [2024-07-14 22:27:17.705013] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.773 BaseBdev4 00:19:18.773 22:27:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:18.773 22:27:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:18.773 22:27:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.773 22:27:17 -- common/autotest_common.sh@889 -- # local i 00:19:18.773 22:27:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.773 22:27:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.773 22:27:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.030 22:27:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:19.030 [ 00:19:19.030 { 00:19:19.030 "name": "BaseBdev4", 00:19:19.030 "aliases": [ 00:19:19.030 "8e008e36-84e7-4de9-b08f-e444f0c461db" 00:19:19.030 ], 00:19:19.030 "product_name": "Malloc disk", 00:19:19.030 "block_size": 512, 00:19:19.030 "num_blocks": 65536, 00:19:19.030 "uuid": "8e008e36-84e7-4de9-b08f-e444f0c461db", 00:19:19.030 "assigned_rate_limits": { 00:19:19.030 "rw_ios_per_sec": 0, 00:19:19.030 "rw_mbytes_per_sec": 0, 00:19:19.030 "r_mbytes_per_sec": 0, 00:19:19.030 "w_mbytes_per_sec": 0 00:19:19.030 }, 00:19:19.030 "claimed": true, 00:19:19.030 "claim_type": "exclusive_write", 00:19:19.030 "zoned": false, 00:19:19.030 
"supported_io_types": { 00:19:19.030 "read": true, 00:19:19.030 "write": true, 00:19:19.030 "unmap": true, 00:19:19.030 "write_zeroes": true, 00:19:19.030 "flush": true, 00:19:19.030 "reset": true, 00:19:19.030 "compare": false, 00:19:19.030 "compare_and_write": false, 00:19:19.030 "abort": true, 00:19:19.030 "nvme_admin": false, 00:19:19.030 "nvme_io": false 00:19:19.030 }, 00:19:19.030 "memory_domains": [ 00:19:19.030 { 00:19:19.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.030 "dma_device_type": 2 00:19:19.030 } 00:19:19.030 ], 00:19:19.030 "driver_specific": {} 00:19:19.030 } 00:19:19.030 ] 00:19:19.030 22:27:18 -- common/autotest_common.sh@895 -- # return 0 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.030 22:27:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.287 22:27:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.287 "name": "Existed_Raid", 00:19:19.287 "uuid": "fe4673a4-fd44-4f43-bdda-8b2f87ca3223", 00:19:19.287 "strip_size_kb": 64, 00:19:19.287 "state": "online", 00:19:19.287 "raid_level": "concat", 00:19:19.287 "superblock": false, 00:19:19.287 "num_base_bdevs": 4, 00:19:19.287 "num_base_bdevs_discovered": 4, 00:19:19.287 "num_base_bdevs_operational": 4, 00:19:19.287 "base_bdevs_list": [ 00:19:19.287 { 00:19:19.287 "name": "BaseBdev1", 00:19:19.287 "uuid": "b1ac1435-2c89-410c-bd1b-1fa68595d2e7", 00:19:19.287 "is_configured": true, 00:19:19.287 "data_offset": 0, 00:19:19.287 "data_size": 65536 00:19:19.287 }, 00:19:19.287 { 00:19:19.287 "name": "BaseBdev2", 00:19:19.287 "uuid": "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0", 00:19:19.287 "is_configured": true, 00:19:19.287 "data_offset": 0, 00:19:19.287 "data_size": 65536 00:19:19.287 }, 00:19:19.287 { 00:19:19.287 "name": "BaseBdev3", 00:19:19.287 "uuid": "a05848ab-a974-4712-9325-3522e0279b88", 00:19:19.287 "is_configured": true, 00:19:19.287 "data_offset": 0, 00:19:19.287 "data_size": 65536 00:19:19.287 }, 00:19:19.287 { 00:19:19.287 "name": "BaseBdev4", 00:19:19.287 "uuid": "8e008e36-84e7-4de9-b08f-e444f0c461db", 00:19:19.287 "is_configured": true, 00:19:19.287 "data_offset": 0, 00:19:19.287 "data_size": 65536 00:19:19.287 } 00:19:19.287 ] 00:19:19.287 }' 00:19:19.287 22:27:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.287 22:27:18 -- common/autotest_common.sh@10 -- # set +x 00:19:20.219 22:27:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:20.219 [2024-07-14 22:27:19.188520] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.219 [2024-07-14 22:27:19.188546] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.219 [2024-07-14 22:27:19.188602] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.219 22:27:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.477 22:27:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.477 "name": "Existed_Raid", 00:19:20.477 "uuid": "fe4673a4-fd44-4f43-bdda-8b2f87ca3223", 00:19:20.477 "strip_size_kb": 64, 00:19:20.477 "state": "offline", 00:19:20.477 "raid_level": "concat", 00:19:20.477 "superblock": false, 00:19:20.477 "num_base_bdevs": 4, 00:19:20.477 "num_base_bdevs_discovered": 3, 00:19:20.477 "num_base_bdevs_operational": 3, 00:19:20.477 "base_bdevs_list": [ 00:19:20.477 { 00:19:20.477 "name": null, 00:19:20.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.477 "is_configured": false, 00:19:20.477 "data_offset": 0, 00:19:20.477 "data_size": 65536 00:19:20.477 }, 00:19:20.477 { 00:19:20.477 "name": "BaseBdev2", 00:19:20.477 "uuid": "cc7b60c8-6b97-4225-82a9-086bc0f7b7d0", 00:19:20.477 "is_configured": true, 00:19:20.477 "data_offset": 0, 00:19:20.477 "data_size": 65536 00:19:20.477 }, 00:19:20.477 { 00:19:20.477 "name": "BaseBdev3", 00:19:20.477 "uuid": "a05848ab-a974-4712-9325-3522e0279b88", 00:19:20.477 "is_configured": true, 00:19:20.477 "data_offset": 0, 00:19:20.477 "data_size": 65536 00:19:20.477 }, 00:19:20.477 { 00:19:20.477 "name": "BaseBdev4", 00:19:20.477 "uuid": "8e008e36-84e7-4de9-b08f-e444f0c461db", 00:19:20.477 "is_configured": true, 00:19:20.477 "data_offset": 0, 00:19:20.477 "data_size": 65536 00:19:20.477 } 00:19:20.477 ] 00:19:20.477 }' 00:19:20.477 22:27:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.478 22:27:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.044 22:27:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:21.044 22:27:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:21.301 22:27:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:21.301 22:27:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:21.301 22:27:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:21.301 22:27:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.301 22:27:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:21.560 [2024-07-14 22:27:20.456119] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:21.560 22:27:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:21.560 22:27:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:21.560 22:27:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.560 22:27:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:21.818 22:27:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:21.818 22:27:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.818 22:27:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:22.076 [2024-07-14 22:27:20.971468] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:22.076 22:27:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:22.076 22:27:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.076 22:27:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.076 22:27:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.333 22:27:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.333 22:27:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.333 22:27:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:22.333 [2024-07-14 22:27:21.391291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:22.333 [2024-07-14 22:27:21.391345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:22.591 22:27:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:22.591 22:27:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.591 22:27:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.591 22:27:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:22.848 22:27:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:22.848 22:27:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:22.848 22:27:21 -- bdev/bdev_raid.sh@287 -- # killprocess 122654 00:19:22.848 22:27:21 -- common/autotest_common.sh@926 -- # '[' -z 122654 ']' 00:19:22.848 22:27:21 -- common/autotest_common.sh@930 -- # kill -0 122654 00:19:22.848 22:27:21 -- common/autotest_common.sh@931 -- # uname 00:19:22.848 22:27:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:22.848 22:27:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122654 00:19:22.848 killing process with pid 122654 00:19:22.848 22:27:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:22.848 22:27:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:22.848 22:27:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122654' 00:19:22.848 22:27:21 -- common/autotest_common.sh@945 
-- # kill 122654 00:19:22.848 22:27:21 -- common/autotest_common.sh@950 -- # wait 122654 00:19:22.848 [2024-07-14 22:27:21.700458] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.848 [2024-07-14 22:27:21.700561] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.781 ************************************ 00:19:23.781 END TEST raid_state_function_test 00:19:23.781 ************************************ 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:23.781 00:19:23.781 real 0m13.206s 00:19:23.781 user 0m23.862s 00:19:23.781 sys 0m1.344s 00:19:23.781 22:27:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.781 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:23.781 22:27:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:23.781 22:27:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:23.781 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:23.781 ************************************ 00:19:23.781 START TEST raid_state_function_test_sb 00:19:23.781 ************************************ 00:19:23.781 22:27:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=123098 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123098' 00:19:23.781 Process raid pid: 123098 00:19:23.781 22:27:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123098 /var/tmp/spdk-raid.sock 00:19:23.781 22:27:22 -- common/autotest_common.sh@819 -- # '[' -z 123098 ']' 00:19:23.781 22:27:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:23.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:23.781 22:27:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:23.781 22:27:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:23.781 22:27:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:23.781 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:19:23.781 [2024-07-14 22:27:22.722534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:23.781 [2024-07-14 22:27:22.722658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.038 [2024-07-14 22:27:22.874416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.038 [2024-07-14 22:27:23.031044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.298 [2024-07-14 22:27:23.195725] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.862 22:27:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:24.862 22:27:23 -- common/autotest_common.sh@852 -- # return 0 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:24.862 [2024-07-14 22:27:23.896318] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.862 [2024-07-14 22:27:23.896382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.862 [2024-07-14 22:27:23.896404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.862 [2024-07-14 22:27:23.896423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.862 [2024-07-14 22:27:23.896429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.862 [2024-07-14 22:27:23.896462] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.862 [2024-07-14 22:27:23.896469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:24.862 [2024-07-14 22:27:23.896489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:24.862 
22:27:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.862 22:27:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.120 22:27:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.120 "name": "Existed_Raid", 00:19:25.120 "uuid": "94c9d799-715f-4372-a32d-58a30c2c75da", 00:19:25.120 "strip_size_kb": 64, 00:19:25.120 "state": "configuring", 00:19:25.120 "raid_level": "concat", 00:19:25.120 "superblock": true, 00:19:25.120 "num_base_bdevs": 4, 00:19:25.120 "num_base_bdevs_discovered": 0, 00:19:25.120 "num_base_bdevs_operational": 4, 00:19:25.120 "base_bdevs_list": [ 00:19:25.120 { 00:19:25.120 "name": "BaseBdev1", 00:19:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.120 "is_configured": false, 00:19:25.120 "data_offset": 0, 00:19:25.120 "data_size": 0 00:19:25.120 }, 00:19:25.120 { 00:19:25.120 "name": "BaseBdev2", 00:19:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.120 "is_configured": false, 00:19:25.120 "data_offset": 0, 00:19:25.120 "data_size": 0 00:19:25.120 }, 00:19:25.120 { 00:19:25.120 "name": "BaseBdev3", 00:19:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.120 "is_configured": false, 00:19:25.120 "data_offset": 0, 00:19:25.120 "data_size": 0 00:19:25.120 }, 00:19:25.120 { 00:19:25.120 "name": "BaseBdev4", 00:19:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.120 "is_configured": false, 00:19:25.120 "data_offset": 0, 00:19:25.120 "data_size": 0 00:19:25.120 } 00:19:25.120 ] 00:19:25.120 }' 00:19:25.120 22:27:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.120 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:19:25.711 22:27:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.969 [2024-07-14 22:27:24.944448] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.969 [2024-07-14 22:27:24.944480] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:25.969 22:27:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:26.226 [2024-07-14 22:27:25.124531] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.226 [2024-07-14 22:27:25.124575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.226 [2024-07-14 22:27:25.124584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.226 [2024-07-14 22:27:25.124609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.226 [2024-07-14 22:27:25.124616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.226 [2024-07-14 22:27:25.124643] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.226 [2024-07-14 22:27:25.124650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.226 [2024-07-14 22:27:25.124667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.226 22:27:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:26.484 [2024-07-14 22:27:25.401997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.484 BaseBdev1 00:19:26.484 22:27:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:26.484 22:27:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:26.484 22:27:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:26.484 22:27:25 -- common/autotest_common.sh@889 -- # local i 00:19:26.484 22:27:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:26.484 22:27:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:26.484 22:27:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.743 22:27:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:26.743 [ 00:19:26.743 { 00:19:26.743 "name": "BaseBdev1", 00:19:26.743 "aliases": [ 00:19:26.743 "66879547-b7f6-4910-8419-cde6a561bbbf" 00:19:26.743 ], 00:19:26.743 "product_name": "Malloc disk", 00:19:26.743 "block_size": 512, 00:19:26.743 "num_blocks": 65536, 00:19:26.743 "uuid": "66879547-b7f6-4910-8419-cde6a561bbbf", 00:19:26.744 "assigned_rate_limits": { 00:19:26.744 "rw_ios_per_sec": 0, 00:19:26.744 "rw_mbytes_per_sec": 0, 00:19:26.744 "r_mbytes_per_sec": 0, 00:19:26.744 "w_mbytes_per_sec": 0 00:19:26.744 }, 00:19:26.744 "claimed": true, 00:19:26.744 "claim_type": "exclusive_write", 00:19:26.744 "zoned": false, 00:19:26.744 "supported_io_types": { 00:19:26.744 "read": true, 00:19:26.744 "write": true, 00:19:26.744 "unmap": true, 00:19:26.744 "write_zeroes": true, 00:19:26.744 "flush": true, 00:19:26.744 "reset": true, 00:19:26.744 "compare": false, 00:19:26.744 "compare_and_write": false, 00:19:26.744 "abort": true, 00:19:26.744 "nvme_admin": false, 00:19:26.744 "nvme_io": false 00:19:26.744 }, 00:19:26.744 "memory_domains": [ 00:19:26.744 { 00:19:26.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.744 "dma_device_type": 2 00:19:26.744 } 00:19:26.744 ], 00:19:26.744 "driver_specific": {} 00:19:26.744 } 00:19:26.744 ] 00:19:27.002 22:27:25 -- common/autotest_common.sh@895 -- # return 0 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.002 22:27:25 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.002 22:27:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.002 22:27:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.002 "name": "Existed_Raid", 00:19:27.002 "uuid": "f3ba1665-eb6a-4114-b1e9-3181d120d928", 00:19:27.002 "strip_size_kb": 64, 00:19:27.002 "state": "configuring", 00:19:27.002 "raid_level": "concat", 00:19:27.002 "superblock": true, 00:19:27.002 "num_base_bdevs": 4, 00:19:27.002 "num_base_bdevs_discovered": 1, 00:19:27.002 "num_base_bdevs_operational": 4, 00:19:27.002 "base_bdevs_list": [ 00:19:27.002 { 00:19:27.002 "name": "BaseBdev1", 00:19:27.002 "uuid": "66879547-b7f6-4910-8419-cde6a561bbbf", 00:19:27.002 "is_configured": true, 00:19:27.002 "data_offset": 2048, 00:19:27.002 "data_size": 63488 00:19:27.002 }, 00:19:27.002 { 00:19:27.002 "name": "BaseBdev2", 00:19:27.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.002 "is_configured": false, 00:19:27.002 "data_offset": 0, 00:19:27.002 "data_size": 0 00:19:27.002 }, 00:19:27.002 { 00:19:27.002 "name": "BaseBdev3", 00:19:27.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.002 "is_configured": false, 00:19:27.002 "data_offset": 0, 00:19:27.002 "data_size": 0 00:19:27.002 }, 00:19:27.002 { 00:19:27.002 "name": "BaseBdev4", 00:19:27.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.002 "is_configured": false, 00:19:27.002 "data_offset": 0, 00:19:27.002 "data_size": 0 00:19:27.002 } 00:19:27.002 ] 00:19:27.002 }' 00:19:27.002 22:27:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.002 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:19:27.569 22:27:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.828 [2024-07-14 22:27:26.778289] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.828 [2024-07-14 22:27:26.778328] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:27.828 22:27:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:27.828 22:27:26 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:28.086 22:27:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:28.345 BaseBdev1 00:19:28.345 22:27:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:28.345 22:27:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:28.345 22:27:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:28.345 22:27:27 -- common/autotest_common.sh@889 -- # local i 00:19:28.345 22:27:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:28.345 22:27:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:28.345 22:27:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.604 22:27:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.863 [ 00:19:28.863 { 00:19:28.863 "name": "BaseBdev1", 00:19:28.863 "aliases": [ 00:19:28.863 
"dbafa84b-defc-424f-a0c7-f6373823e8bb" 00:19:28.863 ], 00:19:28.863 "product_name": "Malloc disk", 00:19:28.863 "block_size": 512, 00:19:28.863 "num_blocks": 65536, 00:19:28.863 "uuid": "dbafa84b-defc-424f-a0c7-f6373823e8bb", 00:19:28.863 "assigned_rate_limits": { 00:19:28.863 "rw_ios_per_sec": 0, 00:19:28.863 "rw_mbytes_per_sec": 0, 00:19:28.863 "r_mbytes_per_sec": 0, 00:19:28.863 "w_mbytes_per_sec": 0 00:19:28.863 }, 00:19:28.863 "claimed": false, 00:19:28.863 "zoned": false, 00:19:28.863 "supported_io_types": { 00:19:28.863 "read": true, 00:19:28.863 "write": true, 00:19:28.863 "unmap": true, 00:19:28.863 "write_zeroes": true, 00:19:28.863 "flush": true, 00:19:28.863 "reset": true, 00:19:28.863 "compare": false, 00:19:28.863 "compare_and_write": false, 00:19:28.863 "abort": true, 00:19:28.863 "nvme_admin": false, 00:19:28.863 "nvme_io": false 00:19:28.863 }, 00:19:28.863 "memory_domains": [ 00:19:28.863 { 00:19:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.863 "dma_device_type": 2 00:19:28.863 } 00:19:28.863 ], 00:19:28.863 "driver_specific": {} 00:19:28.863 } 00:19:28.863 ] 00:19:28.863 22:27:27 -- common/autotest_common.sh@895 -- # return 0 00:19:28.863 22:27:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:29.121 [2024-07-14 22:27:27.931730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.121 [2024-07-14 22:27:27.933819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.121 [2024-07-14 22:27:27.933887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.122 [2024-07-14 22:27:27.933899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.122 [2024-07-14 22:27:27.933939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.122 [2024-07-14 22:27:27.933948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.122 [2024-07-14 22:27:27.933963] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.122 22:27:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.122 22:27:28 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:29.122 "name": "Existed_Raid", 00:19:29.122 "uuid": "cf0fc776-3819-4b77-833c-72dedff4feec", 00:19:29.122 "strip_size_kb": 64, 00:19:29.122 "state": "configuring", 00:19:29.122 "raid_level": "concat", 00:19:29.122 "superblock": true, 00:19:29.122 "num_base_bdevs": 4, 00:19:29.122 "num_base_bdevs_discovered": 1, 00:19:29.122 "num_base_bdevs_operational": 4, 00:19:29.122 "base_bdevs_list": [ 00:19:29.122 { 00:19:29.122 "name": "BaseBdev1", 00:19:29.122 "uuid": "dbafa84b-defc-424f-a0c7-f6373823e8bb", 00:19:29.122 "is_configured": true, 00:19:29.122 "data_offset": 2048, 00:19:29.122 "data_size": 63488 00:19:29.122 }, 00:19:29.122 { 00:19:29.122 "name": "BaseBdev2", 00:19:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.122 "is_configured": false, 00:19:29.122 "data_offset": 0, 00:19:29.122 "data_size": 0 00:19:29.122 }, 00:19:29.122 { 00:19:29.122 "name": "BaseBdev3", 00:19:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.122 "is_configured": false, 00:19:29.122 "data_offset": 0, 00:19:29.122 "data_size": 0 00:19:29.122 }, 00:19:29.122 { 00:19:29.122 "name": "BaseBdev4", 00:19:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.122 "is_configured": false, 00:19:29.122 "data_offset": 0, 00:19:29.122 "data_size": 0 00:19:29.122 } 00:19:29.122 ] 00:19:29.122 }' 00:19:29.122 22:27:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.122 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.056 22:27:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.056 [2024-07-14 22:27:28.988444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.056 BaseBdev2 00:19:30.056 22:27:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:30.056 22:27:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:30.056 22:27:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:30.056 22:27:28 -- common/autotest_common.sh@889 -- # local i 00:19:30.056 22:27:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:30.056 22:27:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:30.056 22:27:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.314 22:27:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.572 [ 00:19:30.572 { 00:19:30.572 "name": "BaseBdev2", 00:19:30.572 "aliases": [ 00:19:30.572 "f11b8ff1-5870-4dd4-bfb4-6411777504b5" 00:19:30.572 ], 00:19:30.572 "product_name": "Malloc disk", 00:19:30.572 "block_size": 512, 00:19:30.572 "num_blocks": 65536, 00:19:30.572 "uuid": "f11b8ff1-5870-4dd4-bfb4-6411777504b5", 00:19:30.572 "assigned_rate_limits": { 00:19:30.572 "rw_ios_per_sec": 0, 00:19:30.572 "rw_mbytes_per_sec": 0, 00:19:30.572 "r_mbytes_per_sec": 0, 00:19:30.572 "w_mbytes_per_sec": 0 00:19:30.572 }, 00:19:30.572 "claimed": true, 00:19:30.572 "claim_type": "exclusive_write", 00:19:30.572 "zoned": false, 00:19:30.572 "supported_io_types": { 00:19:30.572 "read": true, 00:19:30.572 "write": true, 00:19:30.572 "unmap": true, 00:19:30.572 "write_zeroes": true, 00:19:30.572 "flush": true, 00:19:30.572 "reset": true, 00:19:30.572 "compare": false, 00:19:30.572 "compare_and_write": false, 00:19:30.572 "abort": true, 00:19:30.572 "nvme_admin": false, 00:19:30.572 
"nvme_io": false 00:19:30.572 }, 00:19:30.572 "memory_domains": [ 00:19:30.572 { 00:19:30.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.572 "dma_device_type": 2 00:19:30.572 } 00:19:30.572 ], 00:19:30.572 "driver_specific": {} 00:19:30.572 } 00:19:30.572 ] 00:19:30.572 22:27:29 -- common/autotest_common.sh@895 -- # return 0 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.572 22:27:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.831 22:27:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.831 "name": "Existed_Raid", 00:19:30.831 "uuid": "cf0fc776-3819-4b77-833c-72dedff4feec", 00:19:30.831 "strip_size_kb": 64, 00:19:30.831 "state": "configuring", 00:19:30.831 "raid_level": "concat", 00:19:30.831 "superblock": true, 00:19:30.831 "num_base_bdevs": 4, 00:19:30.831 "num_base_bdevs_discovered": 2, 00:19:30.831 "num_base_bdevs_operational": 4, 00:19:30.831 "base_bdevs_list": [ 00:19:30.831 { 00:19:30.831 "name": "BaseBdev1", 00:19:30.831 "uuid": "dbafa84b-defc-424f-a0c7-f6373823e8bb", 00:19:30.831 "is_configured": true, 00:19:30.831 "data_offset": 2048, 00:19:30.831 "data_size": 63488 00:19:30.831 }, 00:19:30.831 { 00:19:30.831 "name": "BaseBdev2", 00:19:30.831 "uuid": "f11b8ff1-5870-4dd4-bfb4-6411777504b5", 00:19:30.831 "is_configured": true, 00:19:30.831 "data_offset": 2048, 00:19:30.831 "data_size": 63488 00:19:30.831 }, 00:19:30.831 { 00:19:30.831 "name": "BaseBdev3", 00:19:30.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.831 "is_configured": false, 00:19:30.831 "data_offset": 0, 00:19:30.831 "data_size": 0 00:19:30.831 }, 00:19:30.831 { 00:19:30.831 "name": "BaseBdev4", 00:19:30.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.831 "is_configured": false, 00:19:30.831 "data_offset": 0, 00:19:30.831 "data_size": 0 00:19:30.831 } 00:19:30.831 ] 00:19:30.831 }' 00:19:30.831 22:27:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.831 22:27:29 -- common/autotest_common.sh@10 -- # set +x 00:19:31.398 22:27:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.657 [2024-07-14 22:27:30.488778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.657 BaseBdev3 00:19:31.657 22:27:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:31.657 22:27:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:31.657 22:27:30 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:31.657 22:27:30 -- common/autotest_common.sh@889 -- # local i 00:19:31.657 22:27:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:31.657 22:27:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:31.657 22:27:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.657 22:27:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.916 [ 00:19:31.916 { 00:19:31.916 "name": "BaseBdev3", 00:19:31.916 "aliases": [ 00:19:31.916 "84a9f741-e97b-4451-b1c7-cee6a51f180c" 00:19:31.916 ], 00:19:31.916 "product_name": "Malloc disk", 00:19:31.916 "block_size": 512, 00:19:31.916 "num_blocks": 65536, 00:19:31.916 "uuid": "84a9f741-e97b-4451-b1c7-cee6a51f180c", 00:19:31.916 "assigned_rate_limits": { 00:19:31.916 "rw_ios_per_sec": 0, 00:19:31.916 "rw_mbytes_per_sec": 0, 00:19:31.916 "r_mbytes_per_sec": 0, 00:19:31.916 "w_mbytes_per_sec": 0 00:19:31.916 }, 00:19:31.916 "claimed": true, 00:19:31.916 "claim_type": "exclusive_write", 00:19:31.916 "zoned": false, 00:19:31.916 "supported_io_types": { 00:19:31.916 "read": true, 00:19:31.916 "write": true, 00:19:31.916 "unmap": true, 00:19:31.916 "write_zeroes": true, 00:19:31.916 "flush": true, 00:19:31.916 "reset": true, 00:19:31.916 "compare": false, 00:19:31.916 "compare_and_write": false, 00:19:31.916 "abort": true, 00:19:31.916 "nvme_admin": false, 00:19:31.916 "nvme_io": false 00:19:31.916 }, 00:19:31.916 "memory_domains": [ 00:19:31.916 { 00:19:31.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.916 "dma_device_type": 2 00:19:31.916 } 00:19:31.916 ], 00:19:31.916 "driver_specific": {} 00:19:31.916 } 00:19:31.916 ] 00:19:31.916 22:27:30 -- common/autotest_common.sh@895 -- # return 0 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.916 22:27:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.175 22:27:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.175 "name": "Existed_Raid", 00:19:32.175 "uuid": "cf0fc776-3819-4b77-833c-72dedff4feec", 00:19:32.175 "strip_size_kb": 64, 00:19:32.175 "state": "configuring", 00:19:32.175 "raid_level": "concat", 00:19:32.175 "superblock": true, 00:19:32.175 "num_base_bdevs": 4, 00:19:32.175 "num_base_bdevs_discovered": 3, 00:19:32.175 "num_base_bdevs_operational": 4, 
00:19:32.175 "base_bdevs_list": [ 00:19:32.175 { 00:19:32.175 "name": "BaseBdev1", 00:19:32.175 "uuid": "dbafa84b-defc-424f-a0c7-f6373823e8bb", 00:19:32.175 "is_configured": true, 00:19:32.175 "data_offset": 2048, 00:19:32.175 "data_size": 63488 00:19:32.175 }, 00:19:32.175 { 00:19:32.175 "name": "BaseBdev2", 00:19:32.175 "uuid": "f11b8ff1-5870-4dd4-bfb4-6411777504b5", 00:19:32.175 "is_configured": true, 00:19:32.175 "data_offset": 2048, 00:19:32.175 "data_size": 63488 00:19:32.175 }, 00:19:32.175 { 00:19:32.175 "name": "BaseBdev3", 00:19:32.175 "uuid": "84a9f741-e97b-4451-b1c7-cee6a51f180c", 00:19:32.175 "is_configured": true, 00:19:32.175 "data_offset": 2048, 00:19:32.175 "data_size": 63488 00:19:32.175 }, 00:19:32.175 { 00:19:32.175 "name": "BaseBdev4", 00:19:32.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.175 "is_configured": false, 00:19:32.175 "data_offset": 0, 00:19:32.175 "data_size": 0 00:19:32.175 } 00:19:32.175 ] 00:19:32.175 }' 00:19:32.175 22:27:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.175 22:27:31 -- common/autotest_common.sh@10 -- # set +x 00:19:32.741 22:27:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:33.000 [2024-07-14 22:27:31.965189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:33.000 BaseBdev4 00:19:33.000 [2024-07-14 22:27:31.965429] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:33.000 [2024-07-14 22:27:31.965444] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:33.000 [2024-07-14 22:27:31.965613] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:33.000 [2024-07-14 22:27:31.965970] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:33.000 [2024-07-14 22:27:31.965994] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:33.000 [2024-07-14 22:27:31.966159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.000 22:27:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:33.000 22:27:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:33.000 22:27:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:33.000 22:27:31 -- common/autotest_common.sh@889 -- # local i 00:19:33.000 22:27:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:33.000 22:27:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:33.000 22:27:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.259 22:27:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:33.517 [ 00:19:33.517 { 00:19:33.517 "name": "BaseBdev4", 00:19:33.517 "aliases": [ 00:19:33.517 "fb62f2a5-2c4f-47c2-be35-cd65705ca949" 00:19:33.517 ], 00:19:33.517 "product_name": "Malloc disk", 00:19:33.517 "block_size": 512, 00:19:33.517 "num_blocks": 65536, 00:19:33.517 "uuid": "fb62f2a5-2c4f-47c2-be35-cd65705ca949", 00:19:33.517 "assigned_rate_limits": { 00:19:33.517 "rw_ios_per_sec": 0, 00:19:33.517 "rw_mbytes_per_sec": 0, 00:19:33.517 "r_mbytes_per_sec": 0, 00:19:33.517 "w_mbytes_per_sec": 0 00:19:33.517 }, 00:19:33.517 "claimed": true, 00:19:33.517 "claim_type": 
"exclusive_write", 00:19:33.517 "zoned": false, 00:19:33.517 "supported_io_types": { 00:19:33.517 "read": true, 00:19:33.517 "write": true, 00:19:33.517 "unmap": true, 00:19:33.517 "write_zeroes": true, 00:19:33.517 "flush": true, 00:19:33.517 "reset": true, 00:19:33.517 "compare": false, 00:19:33.517 "compare_and_write": false, 00:19:33.517 "abort": true, 00:19:33.517 "nvme_admin": false, 00:19:33.517 "nvme_io": false 00:19:33.517 }, 00:19:33.517 "memory_domains": [ 00:19:33.517 { 00:19:33.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.517 "dma_device_type": 2 00:19:33.517 } 00:19:33.517 ], 00:19:33.517 "driver_specific": {} 00:19:33.517 } 00:19:33.518 ] 00:19:33.518 22:27:32 -- common/autotest_common.sh@895 -- # return 0 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.518 22:27:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.776 22:27:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.777 "name": "Existed_Raid", 00:19:33.777 "uuid": "cf0fc776-3819-4b77-833c-72dedff4feec", 00:19:33.777 "strip_size_kb": 64, 00:19:33.777 "state": "online", 00:19:33.777 "raid_level": "concat", 00:19:33.777 "superblock": true, 00:19:33.777 "num_base_bdevs": 4, 00:19:33.777 "num_base_bdevs_discovered": 4, 00:19:33.777 "num_base_bdevs_operational": 4, 00:19:33.777 "base_bdevs_list": [ 00:19:33.777 { 00:19:33.777 "name": "BaseBdev1", 00:19:33.777 "uuid": "dbafa84b-defc-424f-a0c7-f6373823e8bb", 00:19:33.777 "is_configured": true, 00:19:33.777 "data_offset": 2048, 00:19:33.777 "data_size": 63488 00:19:33.777 }, 00:19:33.777 { 00:19:33.777 "name": "BaseBdev2", 00:19:33.777 "uuid": "f11b8ff1-5870-4dd4-bfb4-6411777504b5", 00:19:33.777 "is_configured": true, 00:19:33.777 "data_offset": 2048, 00:19:33.777 "data_size": 63488 00:19:33.777 }, 00:19:33.777 { 00:19:33.777 "name": "BaseBdev3", 00:19:33.777 "uuid": "84a9f741-e97b-4451-b1c7-cee6a51f180c", 00:19:33.777 "is_configured": true, 00:19:33.777 "data_offset": 2048, 00:19:33.777 "data_size": 63488 00:19:33.777 }, 00:19:33.777 { 00:19:33.777 "name": "BaseBdev4", 00:19:33.777 "uuid": "fb62f2a5-2c4f-47c2-be35-cd65705ca949", 00:19:33.777 "is_configured": true, 00:19:33.777 "data_offset": 2048, 00:19:33.777 "data_size": 63488 00:19:33.777 } 00:19:33.777 ] 00:19:33.777 }' 00:19:33.777 22:27:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.777 22:27:32 -- common/autotest_common.sh@10 -- # set +x 00:19:34.342 22:27:33 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:34.599 [2024-07-14 22:27:33.428551] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:34.599 [2024-07-14 22:27:33.428600] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.599 [2024-07-14 22:27:33.428666] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.599 22:27:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.857 22:27:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.857 "name": "Existed_Raid", 00:19:34.857 "uuid": "cf0fc776-3819-4b77-833c-72dedff4feec", 00:19:34.857 "strip_size_kb": 64, 00:19:34.857 "state": "offline", 00:19:34.857 "raid_level": "concat", 00:19:34.857 "superblock": true, 00:19:34.857 "num_base_bdevs": 4, 00:19:34.857 "num_base_bdevs_discovered": 3, 00:19:34.857 "num_base_bdevs_operational": 3, 00:19:34.857 "base_bdevs_list": [ 00:19:34.857 { 00:19:34.857 "name": null, 00:19:34.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.857 "is_configured": false, 00:19:34.857 "data_offset": 2048, 00:19:34.857 "data_size": 63488 00:19:34.857 }, 00:19:34.857 { 00:19:34.857 "name": "BaseBdev2", 00:19:34.857 "uuid": "f11b8ff1-5870-4dd4-bfb4-6411777504b5", 00:19:34.857 "is_configured": true, 00:19:34.857 "data_offset": 2048, 00:19:34.857 "data_size": 63488 00:19:34.857 }, 00:19:34.857 { 00:19:34.857 "name": "BaseBdev3", 00:19:34.857 "uuid": "84a9f741-e97b-4451-b1c7-cee6a51f180c", 00:19:34.857 "is_configured": true, 00:19:34.857 "data_offset": 2048, 00:19:34.857 "data_size": 63488 00:19:34.857 }, 00:19:34.857 { 00:19:34.857 "name": "BaseBdev4", 00:19:34.857 "uuid": "fb62f2a5-2c4f-47c2-be35-cd65705ca949", 00:19:34.857 "is_configured": true, 00:19:34.857 "data_offset": 2048, 00:19:34.857 "data_size": 63488 00:19:34.857 } 00:19:34.857 ] 00:19:34.857 }' 00:19:34.857 22:27:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.857 22:27:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.421 22:27:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:35.421 22:27:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:35.421 22:27:34 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.421 22:27:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:35.679 22:27:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:35.679 22:27:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.679 22:27:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:35.937 [2024-07-14 22:27:34.868334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.937 22:27:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:35.937 22:27:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:35.937 22:27:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.937 22:27:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:36.195 22:27:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:36.195 22:27:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.195 22:27:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:36.453 [2024-07-14 22:27:35.371145] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:36.453 22:27:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:36.453 22:27:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:36.453 22:27:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.453 22:27:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:36.711 22:27:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:36.711 22:27:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.711 22:27:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:36.969 [2024-07-14 22:27:35.914114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:36.969 [2024-07-14 22:27:35.914176] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:36.969 22:27:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:36.969 22:27:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:36.969 22:27:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.969 22:27:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:37.227 22:27:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:37.227 22:27:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:37.227 22:27:36 -- bdev/bdev_raid.sh@287 -- # killprocess 123098 00:19:37.227 22:27:36 -- common/autotest_common.sh@926 -- # '[' -z 123098 ']' 00:19:37.227 22:27:36 -- common/autotest_common.sh@930 -- # kill -0 123098 00:19:37.227 22:27:36 -- common/autotest_common.sh@931 -- # uname 00:19:37.227 22:27:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.227 22:27:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123098 00:19:37.227 22:27:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.227 killing process with pid 123098 00:19:37.227 22:27:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.227 22:27:36 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 123098' 00:19:37.228 22:27:36 -- common/autotest_common.sh@945 -- # kill 123098 00:19:37.228 22:27:36 -- common/autotest_common.sh@950 -- # wait 123098 00:19:37.228 [2024-07-14 22:27:36.253924] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.228 [2024-07-14 22:27:36.254021] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.160 ************************************ 00:19:38.160 END TEST raid_state_function_test_sb 00:19:38.160 ************************************ 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:38.160 00:19:38.160 real 0m14.487s 00:19:38.160 user 0m26.103s 00:19:38.160 sys 0m1.629s 00:19:38.160 22:27:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.160 22:27:37 -- common/autotest_common.sh@10 -- # set +x 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:38.160 22:27:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:38.160 22:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.160 22:27:37 -- common/autotest_common.sh@10 -- # set +x 00:19:38.160 ************************************ 00:19:38.160 START TEST raid_superblock_test 00:19:38.160 ************************************ 00:19:38.160 22:27:37 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=123558 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123558 /var/tmp/spdk-raid.sock 00:19:38.160 22:27:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:38.160 22:27:37 -- common/autotest_common.sh@819 -- # '[' -z 123558 ']' 00:19:38.160 22:27:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:38.160 22:27:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.160 22:27:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:38.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:38.160 22:27:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.160 22:27:37 -- common/autotest_common.sh@10 -- # set +x 00:19:38.415 [2024-07-14 22:27:37.266693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:38.415 [2024-07-14 22:27:37.266844] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123558 ] 00:19:38.415 [2024-07-14 22:27:37.410932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.692 [2024-07-14 22:27:37.574921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.692 [2024-07-14 22:27:37.736930] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.271 22:27:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:39.271 22:27:38 -- common/autotest_common.sh@852 -- # return 0 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:39.271 22:27:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:39.528 malloc1 00:19:39.528 22:27:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.528 [2024-07-14 22:27:38.581946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.528 [2024-07-14 22:27:38.582024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.528 [2024-07-14 22:27:38.582053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:39.528 [2024-07-14 22:27:38.582095] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.528 [2024-07-14 22:27:38.583994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.528 [2024-07-14 22:27:38.584039] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.529 pt1 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:39.785 malloc2 00:19:39.785 22:27:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.042 [2024-07-14 22:27:39.092889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.042 [2024-07-14 22:27:39.092957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.042 [2024-07-14 22:27:39.092995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:40.042 [2024-07-14 22:27:39.093047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.042 [2024-07-14 22:27:39.095111] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.042 [2024-07-14 22:27:39.095175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.042 pt2 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:40.299 malloc3 00:19:40.299 22:27:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:40.557 [2024-07-14 22:27:39.513115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:40.557 [2024-07-14 22:27:39.513180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.557 [2024-07-14 22:27:39.513215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:40.557 [2024-07-14 22:27:39.513252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.557 [2024-07-14 22:27:39.515317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.557 [2024-07-14 22:27:39.515368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:40.557 pt3 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.557 22:27:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:40.815 malloc4 00:19:40.815 22:27:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:41.073 [2024-07-14 22:27:39.933257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:41.073 [2024-07-14 22:27:39.933331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.073 [2024-07-14 22:27:39.933369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:41.073 [2024-07-14 22:27:39.933406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.073 [2024-07-14 22:27:39.935482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.073 [2024-07-14 22:27:39.935532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:41.073 pt4 00:19:41.073 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:41.073 22:27:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:41.073 22:27:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:41.073 [2024-07-14 22:27:40.113346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.073 [2024-07-14 22:27:40.114841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.073 [2024-07-14 22:27:40.114909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.073 [2024-07-14 22:27:40.114980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:41.073 [2024-07-14 22:27:40.115176] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:41.073 [2024-07-14 22:27:40.115199] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:41.073 [2024-07-14 22:27:40.115305] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:41.073 [2024-07-14 22:27:40.115651] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:41.073 [2024-07-14 22:27:40.115675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:41.073 [2024-07-14 22:27:40.115844] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.073 22:27:40 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.332 22:27:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.332 "name": "raid_bdev1", 00:19:41.332 "uuid": "e11f6bb8-c212-407e-a33f-0d674e95b610", 00:19:41.332 "strip_size_kb": 64, 00:19:41.332 "state": "online", 00:19:41.332 "raid_level": "concat", 00:19:41.332 "superblock": true, 00:19:41.332 "num_base_bdevs": 4, 00:19:41.332 "num_base_bdevs_discovered": 4, 00:19:41.332 "num_base_bdevs_operational": 4, 00:19:41.332 "base_bdevs_list": [ 00:19:41.332 { 00:19:41.332 "name": "pt1", 00:19:41.332 "uuid": "8b41130d-31c8-5608-a226-e87f3094c595", 00:19:41.332 "is_configured": true, 00:19:41.332 "data_offset": 2048, 00:19:41.332 "data_size": 63488 00:19:41.332 }, 00:19:41.332 { 00:19:41.332 "name": "pt2", 00:19:41.332 "uuid": "8b6d05ad-2d7f-5b7a-bff0-50e83f284bd3", 00:19:41.332 "is_configured": true, 00:19:41.332 "data_offset": 2048, 00:19:41.332 "data_size": 63488 00:19:41.332 }, 00:19:41.332 { 00:19:41.332 "name": "pt3", 00:19:41.332 "uuid": "6b9e5ee7-c565-52b5-8ef1-fb480fdd6c75", 00:19:41.332 "is_configured": true, 00:19:41.332 "data_offset": 2048, 00:19:41.332 "data_size": 63488 00:19:41.332 }, 00:19:41.332 { 00:19:41.332 "name": "pt4", 00:19:41.332 "uuid": "717fd81a-32ae-59d1-b681-2d8ac297b34f", 00:19:41.332 "is_configured": true, 00:19:41.332 "data_offset": 2048, 00:19:41.332 "data_size": 63488 00:19:41.332 } 00:19:41.332 ] 00:19:41.332 }' 00:19:41.332 22:27:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.332 22:27:40 -- common/autotest_common.sh@10 -- # set +x 00:19:41.898 22:27:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:41.898 22:27:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:42.156 [2024-07-14 22:27:41.125659] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.156 22:27:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e11f6bb8-c212-407e-a33f-0d674e95b610 00:19:42.156 22:27:41 -- bdev/bdev_raid.sh@380 -- # '[' -z e11f6bb8-c212-407e-a33f-0d674e95b610 ']' 00:19:42.156 22:27:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:42.414 [2024-07-14 22:27:41.365515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.414 [2024-07-14 22:27:41.365542] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.414 [2024-07-14 22:27:41.365601] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.414 [2024-07-14 22:27:41.365659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.414 [2024-07-14 22:27:41.365668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:42.414 22:27:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:42.414 22:27:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.672 22:27:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:42.672 22:27:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:42.672 22:27:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.672 22:27:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
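The trace above builds raid_bdev1 in three layers -- four 32 MB malloc bdevs with 512-byte blocks, a passthru wrapper with a fixed UUID over each, and the concat array with a 64 KiB strip and on-disk superblocks (-s) -- then confirms the array reports "online". Condensed into plain rpc.py calls (the $rpc shorthand is only for brevity; paths and arguments are taken from the log):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  $rpc bdev_malloc_create 32 512 -b "malloc$i"   # size in MB, block size in bytes
  $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
$rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s   # -s: write superblocks
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"

The passthru deletes that resume below strip that middle layer after raid_bdev1 itself has been deleted; the superblocks written by -s remain on the malloc bdevs, which is why the later attempt to assemble a new array directly from malloc1-malloc4 fails with "File exists".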
00:19:42.930 22:27:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.930 22:27:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:43.187 22:27:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.187 22:27:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:43.187 22:27:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.187 22:27:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:43.445 22:27:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:43.445 22:27:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:43.704 22:27:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:43.704 22:27:42 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:43.704 22:27:42 -- common/autotest_common.sh@640 -- # local es=0 00:19:43.704 22:27:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:43.704 22:27:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.704 22:27:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:43.704 22:27:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.704 22:27:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:43.704 22:27:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.704 22:27:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:43.704 22:27:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.704 22:27:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:43.704 22:27:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:43.704 [2024-07-14 22:27:42.733739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:43.704 [2024-07-14 22:27:42.735292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:43.704 [2024-07-14 22:27:42.735345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:43.704 [2024-07-14 22:27:42.735387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:43.704 [2024-07-14 22:27:42.735433] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:43.704 [2024-07-14 22:27:42.735492] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:43.704 [2024-07-14 22:27:42.735540] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:43.704 
[2024-07-14 22:27:42.735610] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:43.704 [2024-07-14 22:27:42.735636] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.704 [2024-07-14 22:27:42.735645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:43.704 request: 00:19:43.704 { 00:19:43.704 "name": "raid_bdev1", 00:19:43.704 "raid_level": "concat", 00:19:43.704 "base_bdevs": [ 00:19:43.704 "malloc1", 00:19:43.704 "malloc2", 00:19:43.704 "malloc3", 00:19:43.704 "malloc4" 00:19:43.704 ], 00:19:43.704 "superblock": false, 00:19:43.704 "strip_size_kb": 64, 00:19:43.704 "method": "bdev_raid_create", 00:19:43.704 "req_id": 1 00:19:43.704 } 00:19:43.704 Got JSON-RPC error response 00:19:43.704 response: 00:19:43.704 { 00:19:43.704 "code": -17, 00:19:43.704 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:43.704 } 00:19:43.704 22:27:42 -- common/autotest_common.sh@643 -- # es=1 00:19:43.704 22:27:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:43.704 22:27:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:43.704 22:27:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:43.704 22:27:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:43.704 22:27:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.963 22:27:42 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:43.963 22:27:42 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:43.963 22:27:42 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:44.221 [2024-07-14 22:27:43.101759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:44.221 [2024-07-14 22:27:43.101825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.221 [2024-07-14 22:27:43.101852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:44.221 [2024-07-14 22:27:43.101873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.221 [2024-07-14 22:27:43.103670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.221 [2024-07-14 22:27:43.103731] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:44.221 [2024-07-14 22:27:43.103843] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:44.221 [2024-07-14 22:27:43.103903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:44.221 pt1 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.221 22:27:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.222 22:27:43 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:44.222 22:27:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.222 22:27:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.222 22:27:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.479 22:27:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.479 "name": "raid_bdev1", 00:19:44.479 "uuid": "e11f6bb8-c212-407e-a33f-0d674e95b610", 00:19:44.479 "strip_size_kb": 64, 00:19:44.479 "state": "configuring", 00:19:44.479 "raid_level": "concat", 00:19:44.479 "superblock": true, 00:19:44.479 "num_base_bdevs": 4, 00:19:44.479 "num_base_bdevs_discovered": 1, 00:19:44.479 "num_base_bdevs_operational": 4, 00:19:44.479 "base_bdevs_list": [ 00:19:44.479 { 00:19:44.479 "name": "pt1", 00:19:44.479 "uuid": "8b41130d-31c8-5608-a226-e87f3094c595", 00:19:44.479 "is_configured": true, 00:19:44.479 "data_offset": 2048, 00:19:44.479 "data_size": 63488 00:19:44.479 }, 00:19:44.479 { 00:19:44.479 "name": null, 00:19:44.479 "uuid": "8b6d05ad-2d7f-5b7a-bff0-50e83f284bd3", 00:19:44.479 "is_configured": false, 00:19:44.479 "data_offset": 2048, 00:19:44.479 "data_size": 63488 00:19:44.479 }, 00:19:44.479 { 00:19:44.479 "name": null, 00:19:44.479 "uuid": "6b9e5ee7-c565-52b5-8ef1-fb480fdd6c75", 00:19:44.479 "is_configured": false, 00:19:44.479 "data_offset": 2048, 00:19:44.479 "data_size": 63488 00:19:44.479 }, 00:19:44.479 { 00:19:44.479 "name": null, 00:19:44.479 "uuid": "717fd81a-32ae-59d1-b681-2d8ac297b34f", 00:19:44.479 "is_configured": false, 00:19:44.479 "data_offset": 2048, 00:19:44.479 "data_size": 63488 00:19:44.479 } 00:19:44.479 ] 00:19:44.479 }' 00:19:44.479 22:27:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.479 22:27:43 -- common/autotest_common.sh@10 -- # set +x 00:19:45.041 22:27:43 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:45.041 22:27:43 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:45.299 [2024-07-14 22:27:44.113983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:45.299 [2024-07-14 22:27:44.114037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.299 [2024-07-14 22:27:44.114068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:45.299 [2024-07-14 22:27:44.114084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.299 [2024-07-14 22:27:44.114470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.299 [2024-07-14 22:27:44.114523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:45.299 [2024-07-14 22:27:44.114608] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:45.299 [2024-07-14 22:27:44.114632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:45.299 pt2 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.299 [2024-07-14 22:27:44.318029] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.299 22:27:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.557 22:27:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.557 "name": "raid_bdev1", 00:19:45.557 "uuid": "e11f6bb8-c212-407e-a33f-0d674e95b610", 00:19:45.557 "strip_size_kb": 64, 00:19:45.557 "state": "configuring", 00:19:45.557 "raid_level": "concat", 00:19:45.557 "superblock": true, 00:19:45.557 "num_base_bdevs": 4, 00:19:45.557 "num_base_bdevs_discovered": 1, 00:19:45.557 "num_base_bdevs_operational": 4, 00:19:45.557 "base_bdevs_list": [ 00:19:45.557 { 00:19:45.557 "name": "pt1", 00:19:45.557 "uuid": "8b41130d-31c8-5608-a226-e87f3094c595", 00:19:45.557 "is_configured": true, 00:19:45.557 "data_offset": 2048, 00:19:45.557 "data_size": 63488 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": null, 00:19:45.557 "uuid": "8b6d05ad-2d7f-5b7a-bff0-50e83f284bd3", 00:19:45.557 "is_configured": false, 00:19:45.557 "data_offset": 2048, 00:19:45.557 "data_size": 63488 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": null, 00:19:45.557 "uuid": "6b9e5ee7-c565-52b5-8ef1-fb480fdd6c75", 00:19:45.557 "is_configured": false, 00:19:45.557 "data_offset": 2048, 00:19:45.557 "data_size": 63488 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": null, 00:19:45.557 "uuid": "717fd81a-32ae-59d1-b681-2d8ac297b34f", 00:19:45.557 "is_configured": false, 00:19:45.557 "data_offset": 2048, 00:19:45.557 "data_size": 63488 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 }' 00:19:45.557 22:27:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.557 22:27:44 -- common/autotest_common.sh@10 -- # set +x 00:19:46.122 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:46.122 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:46.122 22:27:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:46.381 [2024-07-14 22:27:45.326232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:46.381 [2024-07-14 22:27:45.326289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.381 [2024-07-14 22:27:45.326323] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:46.381 [2024-07-14 22:27:45.326341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.381 [2024-07-14 22:27:45.326727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.381 [2024-07-14 22:27:45.326785] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:46.381 [2024-07-14 22:27:45.326864] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:46.381 [2024-07-14 22:27:45.326917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:46.381 pt2 00:19:46.381 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:46.381 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:46.381 22:27:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:46.639 [2024-07-14 22:27:45.530266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:46.639 [2024-07-14 22:27:45.530321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.639 [2024-07-14 22:27:45.530345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:46.639 [2024-07-14 22:27:45.530365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.639 [2024-07-14 22:27:45.530740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.639 [2024-07-14 22:27:45.530801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:46.639 [2024-07-14 22:27:45.530879] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:46.639 [2024-07-14 22:27:45.530901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:46.639 pt3 00:19:46.639 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:46.639 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:46.639 22:27:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:46.897 [2024-07-14 22:27:45.718303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:46.897 [2024-07-14 22:27:45.718374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.897 [2024-07-14 22:27:45.718405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:46.897 [2024-07-14 22:27:45.718428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.897 [2024-07-14 22:27:45.718805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.897 [2024-07-14 22:27:45.718859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:46.897 [2024-07-14 22:27:45.718940] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:46.897 [2024-07-14 22:27:45.718980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:46.897 [2024-07-14 22:27:45.719092] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:46.897 [2024-07-14 22:27:45.719113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:46.897 [2024-07-14 22:27:45.719203] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:46.897 [2024-07-14 22:27:45.719493] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:46.897 [2024-07-14 22:27:45.719516] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:46.897 [2024-07-14 22:27:45.719632] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:46.897 pt4 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.897 "name": "raid_bdev1", 00:19:46.897 "uuid": "e11f6bb8-c212-407e-a33f-0d674e95b610", 00:19:46.897 "strip_size_kb": 64, 00:19:46.897 "state": "online", 00:19:46.897 "raid_level": "concat", 00:19:46.897 "superblock": true, 00:19:46.897 "num_base_bdevs": 4, 00:19:46.897 "num_base_bdevs_discovered": 4, 00:19:46.897 "num_base_bdevs_operational": 4, 00:19:46.897 "base_bdevs_list": [ 00:19:46.897 { 00:19:46.897 "name": "pt1", 00:19:46.897 "uuid": "8b41130d-31c8-5608-a226-e87f3094c595", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "pt2", 00:19:46.897 "uuid": "8b6d05ad-2d7f-5b7a-bff0-50e83f284bd3", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "pt3", 00:19:46.897 "uuid": "6b9e5ee7-c565-52b5-8ef1-fb480fdd6c75", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "pt4", 00:19:46.897 "uuid": "717fd81a-32ae-59d1-b681-2d8ac297b34f", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 } 00:19:46.897 ] 00:19:46.897 }' 00:19:46.897 22:27:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.897 22:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:47.831 [2024-07-14 22:27:46.790696] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@430 -- # '[' e11f6bb8-c212-407e-a33f-0d674e95b610 '!=' e11f6bb8-c212-407e-a33f-0d674e95b610 ']' 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:47.831 22:27:46 -- bdev/bdev_raid.sh@511 -- # killprocess 123558 00:19:47.831 22:27:46 -- common/autotest_common.sh@926 -- # '[' 
-z 123558 ']' 00:19:47.831 22:27:46 -- common/autotest_common.sh@930 -- # kill -0 123558 00:19:47.831 22:27:46 -- common/autotest_common.sh@931 -- # uname 00:19:47.831 22:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:47.831 22:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123558 00:19:47.831 killing process with pid 123558 00:19:47.831 22:27:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:47.831 22:27:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:47.831 22:27:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123558' 00:19:47.831 22:27:46 -- common/autotest_common.sh@945 -- # kill 123558 00:19:47.831 22:27:46 -- common/autotest_common.sh@950 -- # wait 123558 00:19:47.831 [2024-07-14 22:27:46.825657] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.831 [2024-07-14 22:27:46.825704] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.831 [2024-07-14 22:27:46.825753] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.831 [2024-07-14 22:27:46.825762] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:48.089 [2024-07-14 22:27:47.079471] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.023 ************************************ 00:19:49.023 END TEST raid_superblock_test 00:19:49.023 ************************************ 00:19:49.023 22:27:47 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:49.023 00:19:49.023 real 0m10.766s 00:19:49.023 user 0m18.638s 00:19:49.023 sys 0m1.332s 00:19:49.023 22:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.023 22:27:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:49.023 22:27:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:49.023 22:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.023 22:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.023 ************************************ 00:19:49.023 START TEST raid_state_function_test 00:19:49.023 ************************************ 00:19:49.023 22:27:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.023 22:27:48 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=123887 00:19:49.023 Process raid pid: 123887 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123887' 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123887 /var/tmp/spdk-raid.sock 00:19:49.023 22:27:48 -- common/autotest_common.sh@819 -- # '[' -z 123887 ']' 00:19:49.023 22:27:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:49.023 22:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:49.023 22:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:49.023 22:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.023 22:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.023 22:27:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:49.281 [2024-07-14 22:27:48.116242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
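Unlike the superblock test, raid_state_function_test pokes the module's configuration state machine directly: with level raid1 the harness passes no -z strip-size argument (strip_size stays 0), and with superblock=false no -s. The trace that follows registers Existed_Raid against four base bdevs that do not exist yet, then creates them one at a time. A condensed sketch of the first two checks, using the jq filter from verify_raid_bdev_state (the trailing .state / .num_base_bdevs_discovered selections are added here for brevity):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "configuring": no member discovered yet
$rpc bdev_malloc_create 32 512 -b BaseBdev1                  # first member appears
$rpc bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # now 1 of 4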
00:19:49.281 [2024-07-14 22:27:48.116462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.281 [2024-07-14 22:27:48.283473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.538 [2024-07-14 22:27:48.453146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.795 [2024-07-14 22:27:48.619747] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.053 22:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.053 22:27:49 -- common/autotest_common.sh@852 -- # return 0 00:19:50.053 22:27:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:50.312 [2024-07-14 22:27:49.247845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.312 [2024-07-14 22:27:49.247913] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.312 [2024-07-14 22:27:49.247925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.312 [2024-07-14 22:27:49.247944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.312 [2024-07-14 22:27:49.247951] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:50.312 [2024-07-14 22:27:49.247982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.312 [2024-07-14 22:27:49.247990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:50.312 [2024-07-14 22:27:49.248009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.312 22:27:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.570 22:27:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.570 "name": "Existed_Raid", 00:19:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.570 "strip_size_kb": 0, 00:19:50.570 "state": "configuring", 00:19:50.570 "raid_level": "raid1", 00:19:50.570 "superblock": false, 00:19:50.570 "num_base_bdevs": 4, 00:19:50.570 "num_base_bdevs_discovered": 0, 00:19:50.570 "num_base_bdevs_operational": 4, 00:19:50.570 "base_bdevs_list": [ 00:19:50.570 { 00:19:50.570 "name": 
"BaseBdev1", 00:19:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.570 "is_configured": false, 00:19:50.570 "data_offset": 0, 00:19:50.570 "data_size": 0 00:19:50.570 }, 00:19:50.570 { 00:19:50.570 "name": "BaseBdev2", 00:19:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.570 "is_configured": false, 00:19:50.570 "data_offset": 0, 00:19:50.570 "data_size": 0 00:19:50.570 }, 00:19:50.570 { 00:19:50.570 "name": "BaseBdev3", 00:19:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.570 "is_configured": false, 00:19:50.570 "data_offset": 0, 00:19:50.570 "data_size": 0 00:19:50.570 }, 00:19:50.570 { 00:19:50.570 "name": "BaseBdev4", 00:19:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.570 "is_configured": false, 00:19:50.570 "data_offset": 0, 00:19:50.570 "data_size": 0 00:19:50.570 } 00:19:50.570 ] 00:19:50.570 }' 00:19:50.570 22:27:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.570 22:27:49 -- common/autotest_common.sh@10 -- # set +x 00:19:51.135 22:27:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:51.394 [2024-07-14 22:27:50.383981] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.394 [2024-07-14 22:27:50.384010] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:51.394 22:27:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:51.652 [2024-07-14 22:27:50.568036] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.652 [2024-07-14 22:27:50.568076] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.652 [2024-07-14 22:27:50.568095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.652 [2024-07-14 22:27:50.568122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.652 [2024-07-14 22:27:50.568130] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:51.652 [2024-07-14 22:27:50.568156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:51.652 [2024-07-14 22:27:50.568163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:51.652 [2024-07-14 22:27:50.568189] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:51.652 22:27:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.922 [2024-07-14 22:27:50.781071] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.922 BaseBdev1 00:19:51.922 22:27:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:51.922 22:27:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:51.922 22:27:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:51.922 22:27:50 -- common/autotest_common.sh@889 -- # local i 00:19:51.922 22:27:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:51.922 22:27:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:51.922 22:27:50 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.184 22:27:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:52.184 [ 00:19:52.184 { 00:19:52.184 "name": "BaseBdev1", 00:19:52.184 "aliases": [ 00:19:52.184 "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca" 00:19:52.184 ], 00:19:52.184 "product_name": "Malloc disk", 00:19:52.184 "block_size": 512, 00:19:52.184 "num_blocks": 65536, 00:19:52.184 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:52.184 "assigned_rate_limits": { 00:19:52.184 "rw_ios_per_sec": 0, 00:19:52.184 "rw_mbytes_per_sec": 0, 00:19:52.184 "r_mbytes_per_sec": 0, 00:19:52.184 "w_mbytes_per_sec": 0 00:19:52.184 }, 00:19:52.184 "claimed": true, 00:19:52.184 "claim_type": "exclusive_write", 00:19:52.184 "zoned": false, 00:19:52.184 "supported_io_types": { 00:19:52.184 "read": true, 00:19:52.184 "write": true, 00:19:52.184 "unmap": true, 00:19:52.184 "write_zeroes": true, 00:19:52.184 "flush": true, 00:19:52.184 "reset": true, 00:19:52.184 "compare": false, 00:19:52.184 "compare_and_write": false, 00:19:52.184 "abort": true, 00:19:52.184 "nvme_admin": false, 00:19:52.184 "nvme_io": false 00:19:52.184 }, 00:19:52.184 "memory_domains": [ 00:19:52.184 { 00:19:52.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.184 "dma_device_type": 2 00:19:52.184 } 00:19:52.184 ], 00:19:52.184 "driver_specific": {} 00:19:52.184 } 00:19:52.184 ] 00:19:52.184 22:27:51 -- common/autotest_common.sh@895 -- # return 0 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.184 22:27:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.441 22:27:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.441 "name": "Existed_Raid", 00:19:52.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.441 "strip_size_kb": 0, 00:19:52.441 "state": "configuring", 00:19:52.441 "raid_level": "raid1", 00:19:52.441 "superblock": false, 00:19:52.441 "num_base_bdevs": 4, 00:19:52.441 "num_base_bdevs_discovered": 1, 00:19:52.441 "num_base_bdevs_operational": 4, 00:19:52.441 "base_bdevs_list": [ 00:19:52.441 { 00:19:52.441 "name": "BaseBdev1", 00:19:52.441 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:52.441 "is_configured": true, 00:19:52.441 "data_offset": 0, 00:19:52.441 "data_size": 65536 00:19:52.441 }, 00:19:52.441 { 00:19:52.441 "name": "BaseBdev2", 00:19:52.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.441 "is_configured": false, 00:19:52.441 "data_offset": 0, 00:19:52.441 "data_size": 0 00:19:52.441 }, 
00:19:52.441 { 00:19:52.441 "name": "BaseBdev3", 00:19:52.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.441 "is_configured": false, 00:19:52.441 "data_offset": 0, 00:19:52.441 "data_size": 0 00:19:52.441 }, 00:19:52.441 { 00:19:52.441 "name": "BaseBdev4", 00:19:52.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.441 "is_configured": false, 00:19:52.441 "data_offset": 0, 00:19:52.441 "data_size": 0 00:19:52.441 } 00:19:52.441 ] 00:19:52.441 }' 00:19:52.441 22:27:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.441 22:27:51 -- common/autotest_common.sh@10 -- # set +x 00:19:53.007 22:27:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:53.265 [2024-07-14 22:27:52.141335] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.265 [2024-07-14 22:27:52.141373] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:53.265 [2024-07-14 22:27:52.317405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.265 [2024-07-14 22:27:52.318863] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.265 [2024-07-14 22:27:52.318935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.265 [2024-07-14 22:27:52.318947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.265 [2024-07-14 22:27:52.318968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.265 [2024-07-14 22:27:52.318976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:53.265 [2024-07-14 22:27:52.318991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.265 22:27:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.524 22:27:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.524 22:27:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.524 22:27:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.524 "name": "Existed_Raid", 00:19:53.524 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:53.524 "strip_size_kb": 0, 00:19:53.524 "state": "configuring", 00:19:53.524 "raid_level": "raid1", 00:19:53.524 "superblock": false, 00:19:53.524 "num_base_bdevs": 4, 00:19:53.524 "num_base_bdevs_discovered": 1, 00:19:53.524 "num_base_bdevs_operational": 4, 00:19:53.524 "base_bdevs_list": [ 00:19:53.524 { 00:19:53.524 "name": "BaseBdev1", 00:19:53.524 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:53.524 "is_configured": true, 00:19:53.524 "data_offset": 0, 00:19:53.524 "data_size": 65536 00:19:53.524 }, 00:19:53.524 { 00:19:53.524 "name": "BaseBdev2", 00:19:53.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.524 "is_configured": false, 00:19:53.524 "data_offset": 0, 00:19:53.524 "data_size": 0 00:19:53.524 }, 00:19:53.524 { 00:19:53.524 "name": "BaseBdev3", 00:19:53.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.524 "is_configured": false, 00:19:53.524 "data_offset": 0, 00:19:53.524 "data_size": 0 00:19:53.524 }, 00:19:53.524 { 00:19:53.524 "name": "BaseBdev4", 00:19:53.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.524 "is_configured": false, 00:19:53.524 "data_offset": 0, 00:19:53.524 "data_size": 0 00:19:53.524 } 00:19:53.524 ] 00:19:53.524 }' 00:19:53.524 22:27:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.524 22:27:52 -- common/autotest_common.sh@10 -- # set +x 00:19:54.091 22:27:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:54.356 [2024-07-14 22:27:53.356168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.356 BaseBdev2 00:19:54.356 22:27:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:54.356 22:27:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:54.356 22:27:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:54.356 22:27:53 -- common/autotest_common.sh@889 -- # local i 00:19:54.356 22:27:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:54.356 22:27:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:54.356 22:27:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.619 22:27:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:54.876 [ 00:19:54.876 { 00:19:54.876 "name": "BaseBdev2", 00:19:54.876 "aliases": [ 00:19:54.876 "68e26df4-d81c-4675-9e72-48a9d3629af8" 00:19:54.876 ], 00:19:54.876 "product_name": "Malloc disk", 00:19:54.876 "block_size": 512, 00:19:54.876 "num_blocks": 65536, 00:19:54.876 "uuid": "68e26df4-d81c-4675-9e72-48a9d3629af8", 00:19:54.876 "assigned_rate_limits": { 00:19:54.876 "rw_ios_per_sec": 0, 00:19:54.876 "rw_mbytes_per_sec": 0, 00:19:54.876 "r_mbytes_per_sec": 0, 00:19:54.876 "w_mbytes_per_sec": 0 00:19:54.876 }, 00:19:54.876 "claimed": true, 00:19:54.876 "claim_type": "exclusive_write", 00:19:54.876 "zoned": false, 00:19:54.876 "supported_io_types": { 00:19:54.876 "read": true, 00:19:54.876 "write": true, 00:19:54.876 "unmap": true, 00:19:54.876 "write_zeroes": true, 00:19:54.876 "flush": true, 00:19:54.876 "reset": true, 00:19:54.876 "compare": false, 00:19:54.876 "compare_and_write": false, 00:19:54.876 "abort": true, 00:19:54.876 "nvme_admin": false, 00:19:54.876 "nvme_io": false 00:19:54.876 }, 00:19:54.876 "memory_domains": [ 00:19:54.876 { 
00:19:54.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.876 "dma_device_type": 2 00:19:54.876 } 00:19:54.876 ], 00:19:54.876 "driver_specific": {} 00:19:54.876 } 00:19:54.876 ] 00:19:54.876 22:27:53 -- common/autotest_common.sh@895 -- # return 0 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.876 22:27:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.133 22:27:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.133 "name": "Existed_Raid", 00:19:55.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.133 "strip_size_kb": 0, 00:19:55.133 "state": "configuring", 00:19:55.133 "raid_level": "raid1", 00:19:55.133 "superblock": false, 00:19:55.133 "num_base_bdevs": 4, 00:19:55.133 "num_base_bdevs_discovered": 2, 00:19:55.133 "num_base_bdevs_operational": 4, 00:19:55.133 "base_bdevs_list": [ 00:19:55.133 { 00:19:55.133 "name": "BaseBdev1", 00:19:55.133 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:55.133 "is_configured": true, 00:19:55.133 "data_offset": 0, 00:19:55.133 "data_size": 65536 00:19:55.133 }, 00:19:55.133 { 00:19:55.133 "name": "BaseBdev2", 00:19:55.133 "uuid": "68e26df4-d81c-4675-9e72-48a9d3629af8", 00:19:55.133 "is_configured": true, 00:19:55.133 "data_offset": 0, 00:19:55.133 "data_size": 65536 00:19:55.133 }, 00:19:55.133 { 00:19:55.133 "name": "BaseBdev3", 00:19:55.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.133 "is_configured": false, 00:19:55.133 "data_offset": 0, 00:19:55.133 "data_size": 0 00:19:55.133 }, 00:19:55.133 { 00:19:55.133 "name": "BaseBdev4", 00:19:55.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.133 "is_configured": false, 00:19:55.133 "data_offset": 0, 00:19:55.133 "data_size": 0 00:19:55.133 } 00:19:55.133 ] 00:19:55.133 }' 00:19:55.133 22:27:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.133 22:27:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 22:27:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:55.956 [2024-07-14 22:27:54.804099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.956 BaseBdev3 00:19:55.956 22:27:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:55.956 22:27:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:55.956 22:27:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:55.956 22:27:54 -- 
common/autotest_common.sh@889 -- # local i 00:19:55.956 22:27:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:55.956 22:27:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:55.956 22:27:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.956 22:27:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:56.213 [ 00:19:56.213 { 00:19:56.213 "name": "BaseBdev3", 00:19:56.213 "aliases": [ 00:19:56.213 "8afda790-4a46-4205-ad5b-14c7501ce95b" 00:19:56.213 ], 00:19:56.213 "product_name": "Malloc disk", 00:19:56.213 "block_size": 512, 00:19:56.213 "num_blocks": 65536, 00:19:56.213 "uuid": "8afda790-4a46-4205-ad5b-14c7501ce95b", 00:19:56.213 "assigned_rate_limits": { 00:19:56.213 "rw_ios_per_sec": 0, 00:19:56.213 "rw_mbytes_per_sec": 0, 00:19:56.213 "r_mbytes_per_sec": 0, 00:19:56.213 "w_mbytes_per_sec": 0 00:19:56.213 }, 00:19:56.213 "claimed": true, 00:19:56.213 "claim_type": "exclusive_write", 00:19:56.213 "zoned": false, 00:19:56.213 "supported_io_types": { 00:19:56.213 "read": true, 00:19:56.213 "write": true, 00:19:56.213 "unmap": true, 00:19:56.213 "write_zeroes": true, 00:19:56.213 "flush": true, 00:19:56.213 "reset": true, 00:19:56.213 "compare": false, 00:19:56.213 "compare_and_write": false, 00:19:56.213 "abort": true, 00:19:56.213 "nvme_admin": false, 00:19:56.213 "nvme_io": false 00:19:56.213 }, 00:19:56.213 "memory_domains": [ 00:19:56.213 { 00:19:56.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.213 "dma_device_type": 2 00:19:56.213 } 00:19:56.213 ], 00:19:56.213 "driver_specific": {} 00:19:56.214 } 00:19:56.214 ] 00:19:56.214 22:27:55 -- common/autotest_common.sh@895 -- # return 0 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.214 22:27:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.472 22:27:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.472 "name": "Existed_Raid", 00:19:56.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.472 "strip_size_kb": 0, 00:19:56.472 "state": "configuring", 00:19:56.472 "raid_level": "raid1", 00:19:56.472 "superblock": false, 00:19:56.472 "num_base_bdevs": 4, 00:19:56.472 "num_base_bdevs_discovered": 3, 00:19:56.472 "num_base_bdevs_operational": 4, 00:19:56.472 "base_bdevs_list": [ 00:19:56.472 { 00:19:56.472 "name": "BaseBdev1", 
00:19:56.472 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:56.472 "is_configured": true, 00:19:56.472 "data_offset": 0, 00:19:56.472 "data_size": 65536 00:19:56.472 }, 00:19:56.472 { 00:19:56.472 "name": "BaseBdev2", 00:19:56.472 "uuid": "68e26df4-d81c-4675-9e72-48a9d3629af8", 00:19:56.472 "is_configured": true, 00:19:56.472 "data_offset": 0, 00:19:56.472 "data_size": 65536 00:19:56.472 }, 00:19:56.472 { 00:19:56.472 "name": "BaseBdev3", 00:19:56.472 "uuid": "8afda790-4a46-4205-ad5b-14c7501ce95b", 00:19:56.472 "is_configured": true, 00:19:56.472 "data_offset": 0, 00:19:56.472 "data_size": 65536 00:19:56.472 }, 00:19:56.472 { 00:19:56.472 "name": "BaseBdev4", 00:19:56.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.472 "is_configured": false, 00:19:56.472 "data_offset": 0, 00:19:56.472 "data_size": 0 00:19:56.472 } 00:19:56.472 ] 00:19:56.472 }' 00:19:56.472 22:27:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.472 22:27:55 -- common/autotest_common.sh@10 -- # set +x 00:19:57.037 22:27:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:57.296 [2024-07-14 22:27:56.300938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:57.296 [2024-07-14 22:27:56.301015] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:57.296 [2024-07-14 22:27:56.301025] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:57.296 [2024-07-14 22:27:56.301175] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:57.296 [2024-07-14 22:27:56.301551] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:57.296 [2024-07-14 22:27:56.301573] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:57.296 BaseBdev4 00:19:57.296 [2024-07-14 22:27:56.301874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.296 22:27:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:57.296 22:27:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:57.296 22:27:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:57.296 22:27:56 -- common/autotest_common.sh@889 -- # local i 00:19:57.296 22:27:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:57.296 22:27:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:57.296 22:27:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.554 22:27:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:57.812 [ 00:19:57.812 { 00:19:57.812 "name": "BaseBdev4", 00:19:57.812 "aliases": [ 00:19:57.812 "b292a79c-4bf1-4a36-bb6c-cb9951709321" 00:19:57.812 ], 00:19:57.812 "product_name": "Malloc disk", 00:19:57.812 "block_size": 512, 00:19:57.812 "num_blocks": 65536, 00:19:57.812 "uuid": "b292a79c-4bf1-4a36-bb6c-cb9951709321", 00:19:57.812 "assigned_rate_limits": { 00:19:57.812 "rw_ios_per_sec": 0, 00:19:57.812 "rw_mbytes_per_sec": 0, 00:19:57.812 "r_mbytes_per_sec": 0, 00:19:57.812 "w_mbytes_per_sec": 0 00:19:57.812 }, 00:19:57.812 "claimed": true, 00:19:57.812 "claim_type": "exclusive_write", 00:19:57.812 "zoned": false, 00:19:57.812 "supported_io_types": { 
00:19:57.812 "read": true, 00:19:57.812 "write": true, 00:19:57.812 "unmap": true, 00:19:57.812 "write_zeroes": true, 00:19:57.812 "flush": true, 00:19:57.812 "reset": true, 00:19:57.812 "compare": false, 00:19:57.812 "compare_and_write": false, 00:19:57.812 "abort": true, 00:19:57.812 "nvme_admin": false, 00:19:57.812 "nvme_io": false 00:19:57.812 }, 00:19:57.812 "memory_domains": [ 00:19:57.812 { 00:19:57.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.812 "dma_device_type": 2 00:19:57.812 } 00:19:57.812 ], 00:19:57.812 "driver_specific": {} 00:19:57.812 } 00:19:57.812 ] 00:19:57.812 22:27:56 -- common/autotest_common.sh@895 -- # return 0 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.812 22:27:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.071 22:27:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.071 "name": "Existed_Raid", 00:19:58.071 "uuid": "feeb4c30-a015-40de-a848-2e3426812b93", 00:19:58.071 "strip_size_kb": 0, 00:19:58.071 "state": "online", 00:19:58.071 "raid_level": "raid1", 00:19:58.071 "superblock": false, 00:19:58.071 "num_base_bdevs": 4, 00:19:58.071 "num_base_bdevs_discovered": 4, 00:19:58.071 "num_base_bdevs_operational": 4, 00:19:58.071 "base_bdevs_list": [ 00:19:58.071 { 00:19:58.071 "name": "BaseBdev1", 00:19:58.071 "uuid": "ad86b7e0-a5c5-4bc0-a06a-1ee32dfcf7ca", 00:19:58.071 "is_configured": true, 00:19:58.071 "data_offset": 0, 00:19:58.071 "data_size": 65536 00:19:58.071 }, 00:19:58.071 { 00:19:58.071 "name": "BaseBdev2", 00:19:58.071 "uuid": "68e26df4-d81c-4675-9e72-48a9d3629af8", 00:19:58.071 "is_configured": true, 00:19:58.071 "data_offset": 0, 00:19:58.071 "data_size": 65536 00:19:58.071 }, 00:19:58.071 { 00:19:58.071 "name": "BaseBdev3", 00:19:58.071 "uuid": "8afda790-4a46-4205-ad5b-14c7501ce95b", 00:19:58.071 "is_configured": true, 00:19:58.071 "data_offset": 0, 00:19:58.071 "data_size": 65536 00:19:58.071 }, 00:19:58.071 { 00:19:58.071 "name": "BaseBdev4", 00:19:58.071 "uuid": "b292a79c-4bf1-4a36-bb6c-cb9951709321", 00:19:58.071 "is_configured": true, 00:19:58.071 "data_offset": 0, 00:19:58.071 "data_size": 65536 00:19:58.071 } 00:19:58.071 ] 00:19:58.071 }' 00:19:58.071 22:27:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.071 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.636 22:27:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:58.894 [2024-07-14 22:27:57.749449] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.894 22:27:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.152 22:27:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.152 "name": "Existed_Raid", 00:19:59.152 "uuid": "feeb4c30-a015-40de-a848-2e3426812b93", 00:19:59.152 "strip_size_kb": 0, 00:19:59.152 "state": "online", 00:19:59.152 "raid_level": "raid1", 00:19:59.152 "superblock": false, 00:19:59.152 "num_base_bdevs": 4, 00:19:59.152 "num_base_bdevs_discovered": 3, 00:19:59.152 "num_base_bdevs_operational": 3, 00:19:59.152 "base_bdevs_list": [ 00:19:59.152 { 00:19:59.152 "name": null, 00:19:59.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.152 "is_configured": false, 00:19:59.152 "data_offset": 0, 00:19:59.152 "data_size": 65536 00:19:59.152 }, 00:19:59.152 { 00:19:59.152 "name": "BaseBdev2", 00:19:59.152 "uuid": "68e26df4-d81c-4675-9e72-48a9d3629af8", 00:19:59.152 "is_configured": true, 00:19:59.152 "data_offset": 0, 00:19:59.152 "data_size": 65536 00:19:59.152 }, 00:19:59.152 { 00:19:59.152 "name": "BaseBdev3", 00:19:59.152 "uuid": "8afda790-4a46-4205-ad5b-14c7501ce95b", 00:19:59.152 "is_configured": true, 00:19:59.152 "data_offset": 0, 00:19:59.152 "data_size": 65536 00:19:59.152 }, 00:19:59.152 { 00:19:59.152 "name": "BaseBdev4", 00:19:59.152 "uuid": "b292a79c-4bf1-4a36-bb6c-cb9951709321", 00:19:59.152 "is_configured": true, 00:19:59.152 "data_offset": 0, 00:19:59.152 "data_size": 65536 00:19:59.152 } 00:19:59.152 ] 00:19:59.152 }' 00:19:59.152 22:27:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.152 22:27:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.717 22:27:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:59.717 22:27:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:59.717 22:27:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.717 22:27:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:59.974 22:27:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:59.974 22:27:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:59.974 22:27:58 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:00.231 [2024-07-14 22:27:59.118776] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:00.231 22:27:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.231 22:27:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.231 22:27:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.231 22:27:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.489 22:27:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:00.489 22:27:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.489 22:27:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:00.747 [2024-07-14 22:27:59.653518] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:00.747 22:27:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.747 22:27:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.747 22:27:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.747 22:27:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.004 22:27:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:01.004 22:27:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:01.004 22:27:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:01.262 [2024-07-14 22:28:00.172541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:01.262 [2024-07-14 22:28:00.172577] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.262 [2024-07-14 22:28:00.172633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.262 [2024-07-14 22:28:00.235438] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.262 [2024-07-14 22:28:00.235469] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:01.262 22:28:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:01.262 22:28:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:01.262 22:28:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.262 22:28:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:01.519 22:28:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:01.519 22:28:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:01.519 22:28:00 -- bdev/bdev_raid.sh@287 -- # killprocess 123887 00:20:01.519 22:28:00 -- common/autotest_common.sh@926 -- # '[' -z 123887 ']' 00:20:01.519 22:28:00 -- common/autotest_common.sh@930 -- # kill -0 123887 00:20:01.519 22:28:00 -- common/autotest_common.sh@931 -- # uname 00:20:01.519 22:28:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.519 22:28:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123887 00:20:01.519 killing process with pid 123887 00:20:01.519 22:28:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:01.519 22:28:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:01.519 22:28:00 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 123887' 00:20:01.519 22:28:00 -- common/autotest_common.sh@945 -- # kill 123887 00:20:01.519 22:28:00 -- common/autotest_common.sh@950 -- # wait 123887 00:20:01.519 [2024-07-14 22:28:00.454151] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.519 [2024-07-14 22:28:00.454293] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.452 ************************************ 00:20:02.452 END TEST raid_state_function_test 00:20:02.452 ************************************ 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:02.452 00:20:02.452 real 0m13.329s 00:20:02.452 user 0m24.005s 00:20:02.452 sys 0m1.461s 00:20:02.452 22:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.452 22:28:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:02.452 22:28:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:02.452 22:28:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.452 22:28:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.452 ************************************ 00:20:02.452 START TEST raid_state_function_test_sb 00:20:02.452 ************************************ 00:20:02.452 22:28:01 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:02.452 
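Side note on the arguments assembled above: for raid1 the test pins strip_size=0 and passes no strip-size flag (a mirror has no stripes), while superblock=true turns into the bare '-s' switch on bdev_raid_create. A minimal by-hand sketch of the same RPC sequence, with the rpc.py path, socket, bdev names and sizes taken from the trace, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # four 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each)
  for i in 1 2 3 4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
  done
  # -s asks for an on-disk superblock; the JSON dumps below then show
  # data_offset 2048 and data_size 63488 instead of 0 and 65536
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # the same query and jq filter the test uses to verify state
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'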
22:28:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=124355 00:20:02.452 Process raid pid: 124355 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124355' 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:02.452 22:28:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124355 /var/tmp/spdk-raid.sock 00:20:02.452 22:28:01 -- common/autotest_common.sh@819 -- # '[' -z 124355 ']' 00:20:02.452 22:28:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.452 22:28:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.452 22:28:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.452 22:28:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.452 22:28:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.452 [2024-07-14 22:28:01.490375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:02.452 [2024-07-14 22:28:01.490599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.709 [2024-07-14 22:28:01.659908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.967 [2024-07-14 22:28:01.859256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.967 [2024-07-14 22:28:02.025873] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.534 22:28:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.534 22:28:02 -- common/autotest_common.sh@852 -- # return 0 00:20:03.534 22:28:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:03.534 [2024-07-14 22:28:02.587740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.534 [2024-07-14 22:28:02.587836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.534 [2024-07-14 22:28:02.587850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.534 [2024-07-14 22:28:02.587870] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.534 [2024-07-14 22:28:02.587877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:03.534 [2024-07-14 22:28:02.587909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:03.534 [2024-07-14 22:28:02.587918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:03.534 [2024-07-14 22:28:02.587937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.793 22:28:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.051 22:28:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.051 "name": "Existed_Raid", 00:20:04.051 "uuid": "4262bed4-7c1e-4bf4-ba54-e71e4a96cc40", 00:20:04.051 "strip_size_kb": 0, 00:20:04.051 "state": "configuring", 00:20:04.051 "raid_level": "raid1", 00:20:04.051 "superblock": true, 00:20:04.051 "num_base_bdevs": 4, 00:20:04.051 "num_base_bdevs_discovered": 0, 00:20:04.051 "num_base_bdevs_operational": 4, 00:20:04.051 "base_bdevs_list": [ 00:20:04.051 { 00:20:04.051 "name": "BaseBdev1", 00:20:04.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.051 "is_configured": false, 00:20:04.051 "data_offset": 0, 00:20:04.051 "data_size": 0 00:20:04.051 }, 00:20:04.051 { 00:20:04.051 "name": "BaseBdev2", 00:20:04.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.051 "is_configured": false, 00:20:04.051 "data_offset": 0, 00:20:04.051 "data_size": 0 00:20:04.051 }, 00:20:04.051 { 00:20:04.051 "name": "BaseBdev3", 00:20:04.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.051 "is_configured": false, 00:20:04.051 "data_offset": 0, 00:20:04.051 "data_size": 0 00:20:04.051 }, 00:20:04.051 { 00:20:04.051 "name": "BaseBdev4", 00:20:04.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.051 "is_configured": false, 00:20:04.051 "data_offset": 0, 00:20:04.051 "data_size": 0 00:20:04.051 } 00:20:04.051 ] 00:20:04.051 }' 00:20:04.051 22:28:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.051 22:28:02 -- common/autotest_common.sh@10 -- # set +x 00:20:04.637 22:28:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:04.637 [2024-07-14 22:28:03.671795] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:04.637 [2024-07-14 22:28:03.671828] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:04.637 22:28:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:04.895 [2024-07-14 22:28:03.931889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.895 [2024-07-14 22:28:03.931954] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.895 [2024-07-14 22:28:03.931980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.895 [2024-07-14 22:28:03.932009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.895 [2024-07-14 22:28:03.932018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:04.895 [2024-07-14 
22:28:03.932050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:04.895 [2024-07-14 22:28:03.932057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:04.895 [2024-07-14 22:28:03.932078] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:04.895 22:28:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:05.153 [2024-07-14 22:28:04.165353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.153 BaseBdev1 00:20:05.153 22:28:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:05.153 22:28:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:05.154 22:28:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:05.154 22:28:04 -- common/autotest_common.sh@889 -- # local i 00:20:05.154 22:28:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:05.154 22:28:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:05.154 22:28:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.412 22:28:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:05.670 [ 00:20:05.670 { 00:20:05.670 "name": "BaseBdev1", 00:20:05.670 "aliases": [ 00:20:05.670 "f3cadd44-16f0-4e2e-86f8-ecb79da5a649" 00:20:05.670 ], 00:20:05.670 "product_name": "Malloc disk", 00:20:05.670 "block_size": 512, 00:20:05.670 "num_blocks": 65536, 00:20:05.670 "uuid": "f3cadd44-16f0-4e2e-86f8-ecb79da5a649", 00:20:05.670 "assigned_rate_limits": { 00:20:05.670 "rw_ios_per_sec": 0, 00:20:05.670 "rw_mbytes_per_sec": 0, 00:20:05.670 "r_mbytes_per_sec": 0, 00:20:05.670 "w_mbytes_per_sec": 0 00:20:05.670 }, 00:20:05.670 "claimed": true, 00:20:05.670 "claim_type": "exclusive_write", 00:20:05.670 "zoned": false, 00:20:05.670 "supported_io_types": { 00:20:05.670 "read": true, 00:20:05.670 "write": true, 00:20:05.670 "unmap": true, 00:20:05.670 "write_zeroes": true, 00:20:05.670 "flush": true, 00:20:05.670 "reset": true, 00:20:05.670 "compare": false, 00:20:05.670 "compare_and_write": false, 00:20:05.670 "abort": true, 00:20:05.670 "nvme_admin": false, 00:20:05.670 "nvme_io": false 00:20:05.670 }, 00:20:05.670 "memory_domains": [ 00:20:05.670 { 00:20:05.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.670 "dma_device_type": 2 00:20:05.670 } 00:20:05.670 ], 00:20:05.670 "driver_specific": {} 00:20:05.670 } 00:20:05.670 ] 00:20:05.670 22:28:04 -- common/autotest_common.sh@895 -- # return 0 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.670 22:28:04 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.670 22:28:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.948 22:28:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.948 "name": "Existed_Raid", 00:20:05.948 "uuid": "d72dc420-a59f-44ee-a49d-a2c0bcafa0d4", 00:20:05.948 "strip_size_kb": 0, 00:20:05.948 "state": "configuring", 00:20:05.948 "raid_level": "raid1", 00:20:05.948 "superblock": true, 00:20:05.948 "num_base_bdevs": 4, 00:20:05.948 "num_base_bdevs_discovered": 1, 00:20:05.948 "num_base_bdevs_operational": 4, 00:20:05.948 "base_bdevs_list": [ 00:20:05.948 { 00:20:05.948 "name": "BaseBdev1", 00:20:05.948 "uuid": "f3cadd44-16f0-4e2e-86f8-ecb79da5a649", 00:20:05.948 "is_configured": true, 00:20:05.948 "data_offset": 2048, 00:20:05.948 "data_size": 63488 00:20:05.948 }, 00:20:05.948 { 00:20:05.948 "name": "BaseBdev2", 00:20:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.948 "is_configured": false, 00:20:05.948 "data_offset": 0, 00:20:05.948 "data_size": 0 00:20:05.948 }, 00:20:05.948 { 00:20:05.948 "name": "BaseBdev3", 00:20:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.948 "is_configured": false, 00:20:05.948 "data_offset": 0, 00:20:05.948 "data_size": 0 00:20:05.948 }, 00:20:05.948 { 00:20:05.948 "name": "BaseBdev4", 00:20:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.948 "is_configured": false, 00:20:05.948 "data_offset": 0, 00:20:05.948 "data_size": 0 00:20:05.948 } 00:20:05.948 ] 00:20:05.948 }' 00:20:05.948 22:28:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.948 22:28:04 -- common/autotest_common.sh@10 -- # set +x 00:20:06.518 22:28:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:06.776 [2024-07-14 22:28:05.601621] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:06.776 [2024-07-14 22:28:05.601664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:06.776 22:28:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:06.776 22:28:05 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:07.034 22:28:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:07.034 BaseBdev1 00:20:07.034 22:28:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:07.034 22:28:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:07.034 22:28:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:07.034 22:28:06 -- common/autotest_common.sh@889 -- # local i 00:20:07.034 22:28:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:07.034 22:28:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:07.034 22:28:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.292 22:28:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:07.551 [ 00:20:07.551 { 00:20:07.551 "name": "BaseBdev1", 00:20:07.551 "aliases": [ 00:20:07.551 
"0fa7e0e0-1f15-42ca-b373-50dc3a6e9157" 00:20:07.551 ], 00:20:07.551 "product_name": "Malloc disk", 00:20:07.551 "block_size": 512, 00:20:07.551 "num_blocks": 65536, 00:20:07.551 "uuid": "0fa7e0e0-1f15-42ca-b373-50dc3a6e9157", 00:20:07.551 "assigned_rate_limits": { 00:20:07.551 "rw_ios_per_sec": 0, 00:20:07.551 "rw_mbytes_per_sec": 0, 00:20:07.551 "r_mbytes_per_sec": 0, 00:20:07.551 "w_mbytes_per_sec": 0 00:20:07.551 }, 00:20:07.551 "claimed": false, 00:20:07.551 "zoned": false, 00:20:07.551 "supported_io_types": { 00:20:07.551 "read": true, 00:20:07.551 "write": true, 00:20:07.551 "unmap": true, 00:20:07.551 "write_zeroes": true, 00:20:07.551 "flush": true, 00:20:07.551 "reset": true, 00:20:07.551 "compare": false, 00:20:07.551 "compare_and_write": false, 00:20:07.551 "abort": true, 00:20:07.551 "nvme_admin": false, 00:20:07.551 "nvme_io": false 00:20:07.551 }, 00:20:07.551 "memory_domains": [ 00:20:07.551 { 00:20:07.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.551 "dma_device_type": 2 00:20:07.551 } 00:20:07.551 ], 00:20:07.551 "driver_specific": {} 00:20:07.551 } 00:20:07.551 ] 00:20:07.551 22:28:06 -- common/autotest_common.sh@895 -- # return 0 00:20:07.551 22:28:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:07.810 [2024-07-14 22:28:06.643118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.810 [2024-07-14 22:28:06.644926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.810 [2024-07-14 22:28:06.645002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.810 [2024-07-14 22:28:06.645030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.810 [2024-07-14 22:28:06.645074] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.810 [2024-07-14 22:28:06.645081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:07.810 [2024-07-14 22:28:06.645104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.810 22:28:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.068 22:28:06 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:20:08.068 "name": "Existed_Raid", 00:20:08.068 "uuid": "f5a72987-76e6-4e96-bea7-eba204f19ded", 00:20:08.068 "strip_size_kb": 0, 00:20:08.068 "state": "configuring", 00:20:08.068 "raid_level": "raid1", 00:20:08.068 "superblock": true, 00:20:08.068 "num_base_bdevs": 4, 00:20:08.068 "num_base_bdevs_discovered": 1, 00:20:08.068 "num_base_bdevs_operational": 4, 00:20:08.068 "base_bdevs_list": [ 00:20:08.068 { 00:20:08.068 "name": "BaseBdev1", 00:20:08.068 "uuid": "0fa7e0e0-1f15-42ca-b373-50dc3a6e9157", 00:20:08.068 "is_configured": true, 00:20:08.068 "data_offset": 2048, 00:20:08.068 "data_size": 63488 00:20:08.068 }, 00:20:08.069 { 00:20:08.069 "name": "BaseBdev2", 00:20:08.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.069 "is_configured": false, 00:20:08.069 "data_offset": 0, 00:20:08.069 "data_size": 0 00:20:08.069 }, 00:20:08.069 { 00:20:08.069 "name": "BaseBdev3", 00:20:08.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.069 "is_configured": false, 00:20:08.069 "data_offset": 0, 00:20:08.069 "data_size": 0 00:20:08.069 }, 00:20:08.069 { 00:20:08.069 "name": "BaseBdev4", 00:20:08.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.069 "is_configured": false, 00:20:08.069 "data_offset": 0, 00:20:08.069 "data_size": 0 00:20:08.069 } 00:20:08.069 ] 00:20:08.069 }' 00:20:08.069 22:28:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.069 22:28:06 -- common/autotest_common.sh@10 -- # set +x 00:20:08.636 22:28:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:08.894 [2024-07-14 22:28:07.788318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.894 BaseBdev2 00:20:08.894 22:28:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:08.894 22:28:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:08.894 22:28:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:08.894 22:28:07 -- common/autotest_common.sh@889 -- # local i 00:20:08.894 22:28:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:08.894 22:28:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:08.894 22:28:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:09.153 22:28:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:09.153 [ 00:20:09.153 { 00:20:09.153 "name": "BaseBdev2", 00:20:09.153 "aliases": [ 00:20:09.153 "ec9702fc-b099-456e-8d41-19d2287e44c2" 00:20:09.153 ], 00:20:09.153 "product_name": "Malloc disk", 00:20:09.153 "block_size": 512, 00:20:09.153 "num_blocks": 65536, 00:20:09.153 "uuid": "ec9702fc-b099-456e-8d41-19d2287e44c2", 00:20:09.153 "assigned_rate_limits": { 00:20:09.153 "rw_ios_per_sec": 0, 00:20:09.153 "rw_mbytes_per_sec": 0, 00:20:09.153 "r_mbytes_per_sec": 0, 00:20:09.153 "w_mbytes_per_sec": 0 00:20:09.153 }, 00:20:09.153 "claimed": true, 00:20:09.153 "claim_type": "exclusive_write", 00:20:09.153 "zoned": false, 00:20:09.153 "supported_io_types": { 00:20:09.153 "read": true, 00:20:09.153 "write": true, 00:20:09.153 "unmap": true, 00:20:09.153 "write_zeroes": true, 00:20:09.153 "flush": true, 00:20:09.153 "reset": true, 00:20:09.153 "compare": false, 00:20:09.153 "compare_and_write": false, 00:20:09.153 "abort": true, 00:20:09.153 "nvme_admin": false, 00:20:09.153 
"nvme_io": false 00:20:09.153 }, 00:20:09.153 "memory_domains": [ 00:20:09.153 { 00:20:09.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.153 "dma_device_type": 2 00:20:09.153 } 00:20:09.153 ], 00:20:09.153 "driver_specific": {} 00:20:09.153 } 00:20:09.153 ] 00:20:09.153 22:28:08 -- common/autotest_common.sh@895 -- # return 0 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.153 22:28:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.412 22:28:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.412 "name": "Existed_Raid", 00:20:09.412 "uuid": "f5a72987-76e6-4e96-bea7-eba204f19ded", 00:20:09.412 "strip_size_kb": 0, 00:20:09.412 "state": "configuring", 00:20:09.412 "raid_level": "raid1", 00:20:09.412 "superblock": true, 00:20:09.412 "num_base_bdevs": 4, 00:20:09.412 "num_base_bdevs_discovered": 2, 00:20:09.412 "num_base_bdevs_operational": 4, 00:20:09.412 "base_bdevs_list": [ 00:20:09.412 { 00:20:09.412 "name": "BaseBdev1", 00:20:09.412 "uuid": "0fa7e0e0-1f15-42ca-b373-50dc3a6e9157", 00:20:09.412 "is_configured": true, 00:20:09.412 "data_offset": 2048, 00:20:09.412 "data_size": 63488 00:20:09.412 }, 00:20:09.412 { 00:20:09.412 "name": "BaseBdev2", 00:20:09.412 "uuid": "ec9702fc-b099-456e-8d41-19d2287e44c2", 00:20:09.412 "is_configured": true, 00:20:09.412 "data_offset": 2048, 00:20:09.412 "data_size": 63488 00:20:09.412 }, 00:20:09.412 { 00:20:09.412 "name": "BaseBdev3", 00:20:09.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.412 "is_configured": false, 00:20:09.412 "data_offset": 0, 00:20:09.412 "data_size": 0 00:20:09.412 }, 00:20:09.412 { 00:20:09.412 "name": "BaseBdev4", 00:20:09.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.412 "is_configured": false, 00:20:09.412 "data_offset": 0, 00:20:09.412 "data_size": 0 00:20:09.412 } 00:20:09.412 ] 00:20:09.412 }' 00:20:09.412 22:28:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.412 22:28:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.979 22:28:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.238 [2024-07-14 22:28:09.243903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.238 BaseBdev3 00:20:10.238 22:28:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:10.238 22:28:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:10.238 22:28:09 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:10.238 22:28:09 -- common/autotest_common.sh@889 -- # local i 00:20:10.238 22:28:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:10.238 22:28:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:10.238 22:28:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.496 22:28:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:10.755 [ 00:20:10.755 { 00:20:10.755 "name": "BaseBdev3", 00:20:10.755 "aliases": [ 00:20:10.755 "f6376d67-5c9c-44fa-8cb8-8e8551b1233c" 00:20:10.755 ], 00:20:10.755 "product_name": "Malloc disk", 00:20:10.755 "block_size": 512, 00:20:10.755 "num_blocks": 65536, 00:20:10.755 "uuid": "f6376d67-5c9c-44fa-8cb8-8e8551b1233c", 00:20:10.755 "assigned_rate_limits": { 00:20:10.755 "rw_ios_per_sec": 0, 00:20:10.755 "rw_mbytes_per_sec": 0, 00:20:10.755 "r_mbytes_per_sec": 0, 00:20:10.755 "w_mbytes_per_sec": 0 00:20:10.755 }, 00:20:10.755 "claimed": true, 00:20:10.755 "claim_type": "exclusive_write", 00:20:10.755 "zoned": false, 00:20:10.755 "supported_io_types": { 00:20:10.755 "read": true, 00:20:10.755 "write": true, 00:20:10.755 "unmap": true, 00:20:10.755 "write_zeroes": true, 00:20:10.755 "flush": true, 00:20:10.755 "reset": true, 00:20:10.755 "compare": false, 00:20:10.755 "compare_and_write": false, 00:20:10.755 "abort": true, 00:20:10.755 "nvme_admin": false, 00:20:10.755 "nvme_io": false 00:20:10.755 }, 00:20:10.755 "memory_domains": [ 00:20:10.755 { 00:20:10.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.755 "dma_device_type": 2 00:20:10.755 } 00:20:10.755 ], 00:20:10.755 "driver_specific": {} 00:20:10.755 } 00:20:10.755 ] 00:20:10.755 22:28:09 -- common/autotest_common.sh@895 -- # return 0 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.755 22:28:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.756 22:28:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.756 22:28:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.756 22:28:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.014 22:28:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.014 "name": "Existed_Raid", 00:20:11.014 "uuid": "f5a72987-76e6-4e96-bea7-eba204f19ded", 00:20:11.014 "strip_size_kb": 0, 00:20:11.014 "state": "configuring", 00:20:11.014 "raid_level": "raid1", 00:20:11.014 "superblock": true, 00:20:11.014 "num_base_bdevs": 4, 00:20:11.014 "num_base_bdevs_discovered": 3, 00:20:11.014 "num_base_bdevs_operational": 4, 00:20:11.014 
"base_bdevs_list": [ 00:20:11.014 { 00:20:11.014 "name": "BaseBdev1", 00:20:11.014 "uuid": "0fa7e0e0-1f15-42ca-b373-50dc3a6e9157", 00:20:11.014 "is_configured": true, 00:20:11.014 "data_offset": 2048, 00:20:11.014 "data_size": 63488 00:20:11.014 }, 00:20:11.014 { 00:20:11.014 "name": "BaseBdev2", 00:20:11.014 "uuid": "ec9702fc-b099-456e-8d41-19d2287e44c2", 00:20:11.014 "is_configured": true, 00:20:11.014 "data_offset": 2048, 00:20:11.014 "data_size": 63488 00:20:11.014 }, 00:20:11.014 { 00:20:11.014 "name": "BaseBdev3", 00:20:11.014 "uuid": "f6376d67-5c9c-44fa-8cb8-8e8551b1233c", 00:20:11.014 "is_configured": true, 00:20:11.014 "data_offset": 2048, 00:20:11.014 "data_size": 63488 00:20:11.014 }, 00:20:11.014 { 00:20:11.014 "name": "BaseBdev4", 00:20:11.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.014 "is_configured": false, 00:20:11.014 "data_offset": 0, 00:20:11.014 "data_size": 0 00:20:11.014 } 00:20:11.014 ] 00:20:11.014 }' 00:20:11.014 22:28:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.014 22:28:09 -- common/autotest_common.sh@10 -- # set +x 00:20:11.581 22:28:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.840 [2024-07-14 22:28:10.696091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.840 [2024-07-14 22:28:10.696346] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:11.840 [2024-07-14 22:28:10.696392] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:11.840 BaseBdev4 00:20:11.840 [2024-07-14 22:28:10.696547] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:11.840 [2024-07-14 22:28:10.696911] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:11.840 [2024-07-14 22:28:10.696937] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:20:11.840 [2024-07-14 22:28:10.697140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.840 22:28:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:11.840 22:28:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:11.840 22:28:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:11.840 22:28:10 -- common/autotest_common.sh@889 -- # local i 00:20:11.840 22:28:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:11.840 22:28:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:11.840 22:28:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:12.099 22:28:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:12.358 [ 00:20:12.358 { 00:20:12.358 "name": "BaseBdev4", 00:20:12.358 "aliases": [ 00:20:12.358 "a0a3eade-cf30-49f1-8f97-0146b07a4594" 00:20:12.358 ], 00:20:12.358 "product_name": "Malloc disk", 00:20:12.358 "block_size": 512, 00:20:12.358 "num_blocks": 65536, 00:20:12.358 "uuid": "a0a3eade-cf30-49f1-8f97-0146b07a4594", 00:20:12.358 "assigned_rate_limits": { 00:20:12.358 "rw_ios_per_sec": 0, 00:20:12.358 "rw_mbytes_per_sec": 0, 00:20:12.358 "r_mbytes_per_sec": 0, 00:20:12.358 "w_mbytes_per_sec": 0 00:20:12.358 }, 00:20:12.358 "claimed": true, 00:20:12.358 "claim_type": 
"exclusive_write", 00:20:12.358 "zoned": false, 00:20:12.358 "supported_io_types": { 00:20:12.358 "read": true, 00:20:12.358 "write": true, 00:20:12.358 "unmap": true, 00:20:12.358 "write_zeroes": true, 00:20:12.358 "flush": true, 00:20:12.358 "reset": true, 00:20:12.358 "compare": false, 00:20:12.358 "compare_and_write": false, 00:20:12.358 "abort": true, 00:20:12.358 "nvme_admin": false, 00:20:12.358 "nvme_io": false 00:20:12.358 }, 00:20:12.358 "memory_domains": [ 00:20:12.358 { 00:20:12.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.358 "dma_device_type": 2 00:20:12.358 } 00:20:12.358 ], 00:20:12.358 "driver_specific": {} 00:20:12.358 } 00:20:12.358 ] 00:20:12.358 22:28:11 -- common/autotest_common.sh@895 -- # return 0 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.358 22:28:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.616 22:28:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.616 "name": "Existed_Raid", 00:20:12.616 "uuid": "f5a72987-76e6-4e96-bea7-eba204f19ded", 00:20:12.616 "strip_size_kb": 0, 00:20:12.616 "state": "online", 00:20:12.616 "raid_level": "raid1", 00:20:12.616 "superblock": true, 00:20:12.616 "num_base_bdevs": 4, 00:20:12.616 "num_base_bdevs_discovered": 4, 00:20:12.616 "num_base_bdevs_operational": 4, 00:20:12.616 "base_bdevs_list": [ 00:20:12.617 { 00:20:12.617 "name": "BaseBdev1", 00:20:12.617 "uuid": "0fa7e0e0-1f15-42ca-b373-50dc3a6e9157", 00:20:12.617 "is_configured": true, 00:20:12.617 "data_offset": 2048, 00:20:12.617 "data_size": 63488 00:20:12.617 }, 00:20:12.617 { 00:20:12.617 "name": "BaseBdev2", 00:20:12.617 "uuid": "ec9702fc-b099-456e-8d41-19d2287e44c2", 00:20:12.617 "is_configured": true, 00:20:12.617 "data_offset": 2048, 00:20:12.617 "data_size": 63488 00:20:12.617 }, 00:20:12.617 { 00:20:12.617 "name": "BaseBdev3", 00:20:12.617 "uuid": "f6376d67-5c9c-44fa-8cb8-8e8551b1233c", 00:20:12.617 "is_configured": true, 00:20:12.617 "data_offset": 2048, 00:20:12.617 "data_size": 63488 00:20:12.617 }, 00:20:12.617 { 00:20:12.617 "name": "BaseBdev4", 00:20:12.617 "uuid": "a0a3eade-cf30-49f1-8f97-0146b07a4594", 00:20:12.617 "is_configured": true, 00:20:12.617 "data_offset": 2048, 00:20:12.617 "data_size": 63488 00:20:12.617 } 00:20:12.617 ] 00:20:12.617 }' 00:20:12.617 22:28:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.617 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.184 22:28:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:13.442 [2024-07-14 22:28:12.291630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.442 22:28:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.701 22:28:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.701 "name": "Existed_Raid", 00:20:13.701 "uuid": "f5a72987-76e6-4e96-bea7-eba204f19ded", 00:20:13.701 "strip_size_kb": 0, 00:20:13.701 "state": "online", 00:20:13.701 "raid_level": "raid1", 00:20:13.701 "superblock": true, 00:20:13.701 "num_base_bdevs": 4, 00:20:13.701 "num_base_bdevs_discovered": 3, 00:20:13.701 "num_base_bdevs_operational": 3, 00:20:13.701 "base_bdevs_list": [ 00:20:13.701 { 00:20:13.701 "name": null, 00:20:13.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.701 "is_configured": false, 00:20:13.701 "data_offset": 2048, 00:20:13.701 "data_size": 63488 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "name": "BaseBdev2", 00:20:13.701 "uuid": "ec9702fc-b099-456e-8d41-19d2287e44c2", 00:20:13.701 "is_configured": true, 00:20:13.701 "data_offset": 2048, 00:20:13.701 "data_size": 63488 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "name": "BaseBdev3", 00:20:13.701 "uuid": "f6376d67-5c9c-44fa-8cb8-8e8551b1233c", 00:20:13.701 "is_configured": true, 00:20:13.701 "data_offset": 2048, 00:20:13.701 "data_size": 63488 00:20:13.701 }, 00:20:13.701 { 00:20:13.701 "name": "BaseBdev4", 00:20:13.701 "uuid": "a0a3eade-cf30-49f1-8f97-0146b07a4594", 00:20:13.701 "is_configured": true, 00:20:13.701 "data_offset": 2048, 00:20:13.701 "data_size": 63488 00:20:13.701 } 00:20:13.701 ] 00:20:13.701 }' 00:20:13.701 22:28:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.701 22:28:12 -- common/autotest_common.sh@10 -- # set +x 00:20:14.267 22:28:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:14.267 22:28:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:14.267 22:28:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.267 22:28:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:14.523 22:28:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:14.523 22:28:13 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:14.523 22:28:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:14.780 [2024-07-14 22:28:13.703010] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.780 22:28:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:14.780 22:28:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:14.780 22:28:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.780 22:28:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:15.038 22:28:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:15.038 22:28:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:15.038 22:28:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:15.296 [2024-07-14 22:28:14.218048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.296 22:28:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:15.296 22:28:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:15.296 22:28:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.296 22:28:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:15.554 22:28:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:15.554 22:28:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:15.554 22:28:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:15.812 [2024-07-14 22:28:14.653052] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:15.812 [2024-07-14 22:28:14.653084] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.812 [2024-07-14 22:28:14.653142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.812 [2024-07-14 22:28:14.716737] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.812 [2024-07-14 22:28:14.716774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:15.812 22:28:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:15.812 22:28:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:15.812 22:28:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:15.812 22:28:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.069 22:28:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:16.070 22:28:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:16.070 22:28:14 -- bdev/bdev_raid.sh@287 -- # killprocess 124355 00:20:16.070 22:28:14 -- common/autotest_common.sh@926 -- # '[' -z 124355 ']' 00:20:16.070 22:28:14 -- common/autotest_common.sh@930 -- # kill -0 124355 00:20:16.070 22:28:14 -- common/autotest_common.sh@931 -- # uname 00:20:16.070 22:28:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.070 22:28:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124355 00:20:16.070 killing process with pid 124355 00:20:16.070 22:28:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
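The teardown running through this point is the harness's killprocess helper: it confirms the pid argument is set and the process is still alive (kill -0), reads the process name with ps so that a sudo wrapper is never signalled directly, then kills and reaps the reactor so the RPC socket is released for the next test. A condensed sketch of that logic, reconstructed from the visible trace (the real body in autotest_common.sh may differ in detail, and the sudo branch is not taken here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then              # never signal the sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                   # reap it so the socket is freed
        fi
    }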
00:20:16.070 22:28:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:16.070 22:28:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124355' 00:20:16.070 22:28:14 -- common/autotest_common.sh@945 -- # kill 124355 00:20:16.070 22:28:14 -- common/autotest_common.sh@950 -- # wait 124355 00:20:16.070 [2024-07-14 22:28:14.990837] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:16.070 [2024-07-14 22:28:14.990981] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:17.004 ************************************ 00:20:17.004 END TEST raid_state_function_test_sb 00:20:17.004 ************************************ 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:17.004 00:20:17.004 real 0m14.482s 00:20:17.004 user 0m26.082s 00:20:17.004 sys 0m1.638s 00:20:17.004 22:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.004 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:17.004 22:28:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:17.004 22:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:17.004 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.004 ************************************ 00:20:17.004 START TEST raid_superblock_test 00:20:17.004 ************************************ 00:20:17.004 22:28:15 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=124816 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124816 /var/tmp/spdk-raid.sock 00:20:17.004 22:28:15 -- common/autotest_common.sh@819 -- # '[' -z 124816 ']' 00:20:17.004 22:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:17.004 22:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:17.004 22:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
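raid_superblock_test drives this freshly started bdev_svc application purely over JSON-RPC on /var/tmp/spdk-raid.sock. As the trace below shows step by step, each base device is a 32 MiB malloc bdev (512-byte blocks) wrapped in a passthru bdev with a fixed UUID, so that a member can later be detached and re-attached without destroying the data underneath it, and the array is created with -s so that a raid superblock is written onto every member. A sketch of that construction sequence, using only the RPCs and arguments visible in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB backing store with 512-byte blocks
        $rpc bdev_malloc_create 32 512 -b malloc$i
        # the passthru layer is what lets the test remove and re-add a
        # member without touching the malloc bdev (or its superblock)
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s asks bdev_raid_create to write a superblock onto each member
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s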
00:20:17.004 22:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.004 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.004 22:28:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:17.004 [2024-07-14 22:28:16.028853] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:17.004 [2024-07-14 22:28:16.029316] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124816 ] 00:20:17.262 [2024-07-14 22:28:16.198166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.519 [2024-07-14 22:28:16.395495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.519 [2024-07-14 22:28:16.562577] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.085 22:28:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.085 22:28:16 -- common/autotest_common.sh@852 -- # return 0 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:18.085 22:28:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:18.343 malloc1 00:20:18.343 22:28:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:18.343 [2024-07-14 22:28:17.403150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:18.343 [2024-07-14 22:28:17.403243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.343 [2024-07-14 22:28:17.403272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:18.343 [2024-07-14 22:28:17.403330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.343 [2024-07-14 22:28:17.405356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.343 [2024-07-14 22:28:17.405402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:18.343 pt1 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:18.602 22:28:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:18.602 malloc2 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:18.860 [2024-07-14 22:28:17.863297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:18.860 [2024-07-14 22:28:17.863379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.860 [2024-07-14 22:28:17.863418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:18.860 [2024-07-14 22:28:17.863480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.860 [2024-07-14 22:28:17.865712] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.860 [2024-07-14 22:28:17.865761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:18.860 pt2 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:18.860 22:28:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:19.119 malloc3 00:20:19.119 22:28:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:19.385 [2024-07-14 22:28:18.337252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:19.385 [2024-07-14 22:28:18.337337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.385 [2024-07-14 22:28:18.337373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:19.385 [2024-07-14 22:28:18.337426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.385 [2024-07-14 22:28:18.339627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.385 [2024-07-14 22:28:18.339681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:19.385 pt3 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:19.385 22:28:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:19.662 malloc4 00:20:19.662 22:28:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:19.921 [2024-07-14 22:28:18.797808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:19.921 [2024-07-14 22:28:18.797899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.921 [2024-07-14 22:28:18.797954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:19.921 [2024-07-14 22:28:18.798011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.921 [2024-07-14 22:28:18.799951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.921 [2024-07-14 22:28:18.799998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:19.921 pt4 00:20:19.921 22:28:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:19.921 22:28:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:19.921 22:28:18 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:19.921 [2024-07-14 22:28:18.985935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:20.179 [2024-07-14 22:28:18.987869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:20.179 [2024-07-14 22:28:18.987954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:20.179 [2024-07-14 22:28:18.988017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:20.179 [2024-07-14 22:28:18.988242] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:20.179 [2024-07-14 22:28:18.988257] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:20.179 [2024-07-14 22:28:18.988421] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:20.179 [2024-07-14 22:28:18.988812] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:20.179 [2024-07-14 22:28:18.988827] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:20.179 [2024-07-14 22:28:18.989007] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.179 22:28:18 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.179 22:28:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.179 22:28:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.179 "name": "raid_bdev1", 00:20:20.179 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:20.179 "strip_size_kb": 0, 00:20:20.179 "state": "online", 00:20:20.179 "raid_level": "raid1", 00:20:20.179 "superblock": true, 00:20:20.179 "num_base_bdevs": 4, 00:20:20.179 "num_base_bdevs_discovered": 4, 00:20:20.179 "num_base_bdevs_operational": 4, 00:20:20.179 "base_bdevs_list": [ 00:20:20.179 { 00:20:20.179 "name": "pt1", 00:20:20.179 "uuid": "0ff34392-c2ca-526b-91fe-dfba8e87fbd1", 00:20:20.179 "is_configured": true, 00:20:20.179 "data_offset": 2048, 00:20:20.179 "data_size": 63488 00:20:20.179 }, 00:20:20.179 { 00:20:20.179 "name": "pt2", 00:20:20.179 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:20.179 "is_configured": true, 00:20:20.179 "data_offset": 2048, 00:20:20.179 "data_size": 63488 00:20:20.179 }, 00:20:20.179 { 00:20:20.179 "name": "pt3", 00:20:20.179 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:20.179 "is_configured": true, 00:20:20.179 "data_offset": 2048, 00:20:20.179 "data_size": 63488 00:20:20.179 }, 00:20:20.179 { 00:20:20.179 "name": "pt4", 00:20:20.179 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:20.179 "is_configured": true, 00:20:20.179 "data_offset": 2048, 00:20:20.179 "data_size": 63488 00:20:20.179 } 00:20:20.179 ] 00:20:20.179 }' 00:20:20.179 22:28:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.179 22:28:19 -- common/autotest_common.sh@10 -- # set +x 00:20:21.112 22:28:19 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:21.112 22:28:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:21.112 [2024-07-14 22:28:20.150282] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.112 22:28:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=52fe5356-b174-4bff-aaf4-61555f6f2cb5 00:20:21.112 22:28:20 -- bdev/bdev_raid.sh@380 -- # '[' -z 52fe5356-b174-4bff-aaf4-61555f6f2cb5 ']' 00:20:21.112 22:28:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:21.370 [2024-07-14 22:28:20.394125] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.370 [2024-07-14 22:28:20.394149] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.370 [2024-07-14 22:28:20.394225] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.370 [2024-07-14 22:28:20.394307] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.370 [2024-07-14 22:28:20.394318] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:21.370 22:28:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.370 22:28:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:21.627 22:28:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:21.627 22:28:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:21.627 22:28:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:21.627 22:28:20 -- 
bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:21.883 22:28:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:21.883 22:28:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:22.141 22:28:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.141 22:28:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:22.399 22:28:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.399 22:28:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:22.657 22:28:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:22.657 22:28:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:22.914 22:28:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:22.914 22:28:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:22.914 22:28:21 -- common/autotest_common.sh@640 -- # local es=0 00:20:22.914 22:28:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:22.914 22:28:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.914 22:28:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.914 22:28:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.914 22:28:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.914 22:28:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.914 22:28:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.914 22:28:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.914 22:28:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:22.914 22:28:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:22.914 [2024-07-14 22:28:21.922356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:22.914 [2024-07-14 22:28:21.923834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:22.914 [2024-07-14 22:28:21.923889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:22.914 [2024-07-14 22:28:21.923928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:22.914 [2024-07-14 22:28:21.923975] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:22.914 [2024-07-14 22:28:21.924038] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:22.914 [2024-07-14 22:28:21.924070] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:22.914 [2024-07-14 22:28:21.924137] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:22.914 [2024-07-14 22:28:21.924161] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.914 [2024-07-14 22:28:21.924170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:20:22.914 request: 00:20:22.914 { 00:20:22.914 "name": "raid_bdev1", 00:20:22.914 "raid_level": "raid1", 00:20:22.914 "base_bdevs": [ 00:20:22.914 "malloc1", 00:20:22.914 "malloc2", 00:20:22.914 "malloc3", 00:20:22.914 "malloc4" 00:20:22.914 ], 00:20:22.914 "superblock": false, 00:20:22.914 "method": "bdev_raid_create", 00:20:22.914 "req_id": 1 00:20:22.914 } 00:20:22.914 Got JSON-RPC error response 00:20:22.914 response: 00:20:22.914 { 00:20:22.914 "code": -17, 00:20:22.914 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:22.914 } 00:20:22.914 22:28:21 -- common/autotest_common.sh@643 -- # es=1 00:20:22.914 22:28:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:22.914 22:28:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:22.914 22:28:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:22.914 22:28:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.914 22:28:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:23.172 22:28:22 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:23.172 22:28:22 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:23.172 22:28:22 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:23.431 [2024-07-14 22:28:22.410430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:23.431 [2024-07-14 22:28:22.410494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.431 [2024-07-14 22:28:22.410522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:23.431 [2024-07-14 22:28:22.410546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.431 [2024-07-14 22:28:22.412415] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.431 [2024-07-14 22:28:22.412474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:23.431 [2024-07-14 22:28:22.412561] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:23.431 [2024-07-14 22:28:22.412613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:23.431 pt1 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.431 22:28:22 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.431 22:28:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.689 22:28:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.689 "name": "raid_bdev1", 00:20:23.689 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:23.689 "strip_size_kb": 0, 00:20:23.689 "state": "configuring", 00:20:23.689 "raid_level": "raid1", 00:20:23.689 "superblock": true, 00:20:23.689 "num_base_bdevs": 4, 00:20:23.689 "num_base_bdevs_discovered": 1, 00:20:23.689 "num_base_bdevs_operational": 4, 00:20:23.689 "base_bdevs_list": [ 00:20:23.689 { 00:20:23.689 "name": "pt1", 00:20:23.689 "uuid": "0ff34392-c2ca-526b-91fe-dfba8e87fbd1", 00:20:23.689 "is_configured": true, 00:20:23.689 "data_offset": 2048, 00:20:23.689 "data_size": 63488 00:20:23.689 }, 00:20:23.689 { 00:20:23.689 "name": null, 00:20:23.689 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:23.689 "is_configured": false, 00:20:23.689 "data_offset": 2048, 00:20:23.689 "data_size": 63488 00:20:23.689 }, 00:20:23.689 { 00:20:23.689 "name": null, 00:20:23.689 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:23.689 "is_configured": false, 00:20:23.689 "data_offset": 2048, 00:20:23.689 "data_size": 63488 00:20:23.689 }, 00:20:23.689 { 00:20:23.689 "name": null, 00:20:23.689 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:23.689 "is_configured": false, 00:20:23.689 "data_offset": 2048, 00:20:23.689 "data_size": 63488 00:20:23.689 } 00:20:23.689 ] 00:20:23.689 }' 00:20:23.689 22:28:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.689 22:28:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.255 22:28:23 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:24.255 22:28:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.512 [2024-07-14 22:28:23.510682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.512 [2024-07-14 22:28:23.510784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.512 [2024-07-14 22:28:23.510824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:24.512 [2024-07-14 22:28:23.510844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.512 [2024-07-14 22:28:23.511350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.512 [2024-07-14 22:28:23.511409] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.512 [2024-07-14 22:28:23.511538] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:24.512 [2024-07-14 22:28:23.511572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.512 pt2 00:20:24.512 22:28:23 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:24.769 [2024-07-14 22:28:23.750711] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.769 22:28:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.026 22:28:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.026 "name": "raid_bdev1", 00:20:25.026 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:25.026 "strip_size_kb": 0, 00:20:25.026 "state": "configuring", 00:20:25.026 "raid_level": "raid1", 00:20:25.026 "superblock": true, 00:20:25.026 "num_base_bdevs": 4, 00:20:25.026 "num_base_bdevs_discovered": 1, 00:20:25.026 "num_base_bdevs_operational": 4, 00:20:25.026 "base_bdevs_list": [ 00:20:25.026 { 00:20:25.026 "name": "pt1", 00:20:25.026 "uuid": "0ff34392-c2ca-526b-91fe-dfba8e87fbd1", 00:20:25.026 "is_configured": true, 00:20:25.026 "data_offset": 2048, 00:20:25.026 "data_size": 63488 00:20:25.026 }, 00:20:25.026 { 00:20:25.026 "name": null, 00:20:25.026 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:25.026 "is_configured": false, 00:20:25.026 "data_offset": 2048, 00:20:25.026 "data_size": 63488 00:20:25.026 }, 00:20:25.026 { 00:20:25.026 "name": null, 00:20:25.026 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:25.026 "is_configured": false, 00:20:25.026 "data_offset": 2048, 00:20:25.026 "data_size": 63488 00:20:25.026 }, 00:20:25.026 { 00:20:25.026 "name": null, 00:20:25.026 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:25.026 "is_configured": false, 00:20:25.026 "data_offset": 2048, 00:20:25.026 "data_size": 63488 00:20:25.026 } 00:20:25.026 ] 00:20:25.026 }' 00:20:25.026 22:28:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.026 22:28:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.959 22:28:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:25.959 22:28:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:25.959 22:28:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.959 [2024-07-14 22:28:24.906994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.959 [2024-07-14 22:28:24.907061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.959 [2024-07-14 22:28:24.907094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:25.959 [2024-07-14 22:28:24.907111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.959 [2024-07-14 22:28:24.907497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.959 [2024-07-14 22:28:24.907547] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.959 
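Note that no bdev_raid_create is issued anywhere in this stretch of the trace: re-registering a passthru is enough, because bdev_raid's examine hook (raid_bdev_examine_load_sb_cb above) finds the superblock written earlier, claims the device, and stitches it back into the half-assembled raid_bdev1, which moves from configuring to online by itself once enough members are present. A condensed sketch of the reassembly loop the trace is executing, with pt1 already re-created and claimed earlier (the jq filter mirrors the harness's own):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # re-expose the remaining members; the examine hook assembles the array
    for i in 2 3 4; do
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "online" once complete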
[2024-07-14 22:28:24.907626] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:25.959 [2024-07-14 22:28:24.907649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.959 pt2 00:20:25.959 22:28:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:25.959 22:28:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:25.960 22:28:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:26.218 [2024-07-14 22:28:25.143026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:26.218 [2024-07-14 22:28:25.143079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.218 [2024-07-14 22:28:25.143106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:26.218 [2024-07-14 22:28:25.143128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.218 [2024-07-14 22:28:25.143440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.218 [2024-07-14 22:28:25.143489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:26.218 [2024-07-14 22:28:25.143560] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:26.218 [2024-07-14 22:28:25.143581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.218 pt3 00:20:26.218 22:28:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:26.218 22:28:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:26.218 22:28:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:26.476 [2024-07-14 22:28:25.331081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:26.476 [2024-07-14 22:28:25.331137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.476 [2024-07-14 22:28:25.331162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:26.476 [2024-07-14 22:28:25.331183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.476 [2024-07-14 22:28:25.331507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.476 [2024-07-14 22:28:25.331554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:26.476 [2024-07-14 22:28:25.331629] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:26.476 [2024-07-14 22:28:25.331652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:26.476 [2024-07-14 22:28:25.331784] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:26.476 [2024-07-14 22:28:25.331797] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:26.476 [2024-07-14 22:28:25.331887] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:26.476 [2024-07-14 22:28:25.332174] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:26.476 [2024-07-14 22:28:25.332193] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x61600000a880 00:20:26.477 [2024-07-14 22:28:25.332305] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.477 pt4 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.477 "name": "raid_bdev1", 00:20:26.477 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:26.477 "strip_size_kb": 0, 00:20:26.477 "state": "online", 00:20:26.477 "raid_level": "raid1", 00:20:26.477 "superblock": true, 00:20:26.477 "num_base_bdevs": 4, 00:20:26.477 "num_base_bdevs_discovered": 4, 00:20:26.477 "num_base_bdevs_operational": 4, 00:20:26.477 "base_bdevs_list": [ 00:20:26.477 { 00:20:26.477 "name": "pt1", 00:20:26.477 "uuid": "0ff34392-c2ca-526b-91fe-dfba8e87fbd1", 00:20:26.477 "is_configured": true, 00:20:26.477 "data_offset": 2048, 00:20:26.477 "data_size": 63488 00:20:26.477 }, 00:20:26.477 { 00:20:26.477 "name": "pt2", 00:20:26.477 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:26.477 "is_configured": true, 00:20:26.477 "data_offset": 2048, 00:20:26.477 "data_size": 63488 00:20:26.477 }, 00:20:26.477 { 00:20:26.477 "name": "pt3", 00:20:26.477 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:26.477 "is_configured": true, 00:20:26.477 "data_offset": 2048, 00:20:26.477 "data_size": 63488 00:20:26.477 }, 00:20:26.477 { 00:20:26.477 "name": "pt4", 00:20:26.477 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:26.477 "is_configured": true, 00:20:26.477 "data_offset": 2048, 00:20:26.477 "data_size": 63488 00:20:26.477 } 00:20:26.477 ] 00:20:26.477 }' 00:20:26.477 22:28:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.477 22:28:25 -- common/autotest_common.sh@10 -- # set +x 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:27.412 [2024-07-14 22:28:26.444104] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@430 -- # '[' 52fe5356-b174-4bff-aaf4-61555f6f2cb5 '!=' 52fe5356-b174-4bff-aaf4-61555f6f2cb5 ']' 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:27.412 22:28:26 -- bdev/bdev_raid.sh@436 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:27.671 [2024-07-14 22:28:26.627966] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.671 22:28:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.930 22:28:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.930 "name": "raid_bdev1", 00:20:27.930 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:27.930 "strip_size_kb": 0, 00:20:27.930 "state": "online", 00:20:27.930 "raid_level": "raid1", 00:20:27.930 "superblock": true, 00:20:27.930 "num_base_bdevs": 4, 00:20:27.930 "num_base_bdevs_discovered": 3, 00:20:27.930 "num_base_bdevs_operational": 3, 00:20:27.930 "base_bdevs_list": [ 00:20:27.930 { 00:20:27.930 "name": null, 00:20:27.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.930 "is_configured": false, 00:20:27.930 "data_offset": 2048, 00:20:27.930 "data_size": 63488 00:20:27.930 }, 00:20:27.930 { 00:20:27.930 "name": "pt2", 00:20:27.930 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:27.930 "is_configured": true, 00:20:27.930 "data_offset": 2048, 00:20:27.930 "data_size": 63488 00:20:27.930 }, 00:20:27.930 { 00:20:27.930 "name": "pt3", 00:20:27.930 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:27.930 "is_configured": true, 00:20:27.930 "data_offset": 2048, 00:20:27.930 "data_size": 63488 00:20:27.930 }, 00:20:27.930 { 00:20:27.930 "name": "pt4", 00:20:27.930 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:27.930 "is_configured": true, 00:20:27.930 "data_offset": 2048, 00:20:27.930 "data_size": 63488 00:20:27.930 } 00:20:27.930 ] 00:20:27.930 }' 00:20:27.930 22:28:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.930 22:28:26 -- common/autotest_common.sh@10 -- # set +x 00:20:28.497 22:28:27 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:28.755 [2024-07-14 22:28:27.712216] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.755 [2024-07-14 22:28:27.712276] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.755 [2024-07-14 22:28:27.712376] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.755 [2024-07-14 22:28:27.712470] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.755 [2024-07-14 22:28:27.712483] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name 
raid_bdev1, state offline 00:20:28.755 22:28:27 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.755 22:28:27 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:29.014 22:28:27 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:29.014 22:28:27 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:29.014 22:28:27 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:29.014 22:28:27 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:29.014 22:28:27 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:29.274 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:29.274 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:29.274 22:28:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:29.533 22:28:28 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:29.792 [2024-07-14 22:28:28.720504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:29.792 [2024-07-14 22:28:28.720618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.792 [2024-07-14 22:28:28.720659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:29.792 [2024-07-14 22:28:28.720689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.792 [2024-07-14 22:28:28.723410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.792 [2024-07-14 22:28:28.723512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:29.792 [2024-07-14 22:28:28.723635] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:29.792 [2024-07-14 22:28:28.723692] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.792 pt2 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.792 22:28:28 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.792 22:28:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.051 22:28:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.051 "name": "raid_bdev1", 00:20:30.051 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:30.051 "strip_size_kb": 0, 00:20:30.051 "state": "configuring", 00:20:30.051 "raid_level": "raid1", 00:20:30.051 "superblock": true, 00:20:30.051 "num_base_bdevs": 4, 00:20:30.051 "num_base_bdevs_discovered": 1, 00:20:30.051 "num_base_bdevs_operational": 3, 00:20:30.051 "base_bdevs_list": [ 00:20:30.051 { 00:20:30.051 "name": null, 00:20:30.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.051 "is_configured": false, 00:20:30.051 "data_offset": 2048, 00:20:30.051 "data_size": 63488 00:20:30.051 }, 00:20:30.051 { 00:20:30.051 "name": "pt2", 00:20:30.051 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:30.051 "is_configured": true, 00:20:30.051 "data_offset": 2048, 00:20:30.051 "data_size": 63488 00:20:30.051 }, 00:20:30.051 { 00:20:30.051 "name": null, 00:20:30.051 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:30.051 "is_configured": false, 00:20:30.051 "data_offset": 2048, 00:20:30.051 "data_size": 63488 00:20:30.051 }, 00:20:30.051 { 00:20:30.052 "name": null, 00:20:30.052 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:30.052 "is_configured": false, 00:20:30.052 "data_offset": 2048, 00:20:30.052 "data_size": 63488 00:20:30.052 } 00:20:30.052 ] 00:20:30.052 }' 00:20:30.052 22:28:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.052 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.619 22:28:29 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:30.619 22:28:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:30.619 22:28:29 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:30.878 [2024-07-14 22:28:29.878158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:30.878 [2024-07-14 22:28:29.878294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.878 [2024-07-14 22:28:29.878348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:30.878 [2024-07-14 22:28:29.878383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.878 [2024-07-14 22:28:29.879294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.878 [2024-07-14 22:28:29.879381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:30.878 [2024-07-14 22:28:29.879521] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:30.878 [2024-07-14 22:28:29.879857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:30.878 pt3 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.878 22:28:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.137 22:28:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.137 "name": "raid_bdev1", 00:20:31.137 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:31.137 "strip_size_kb": 0, 00:20:31.137 "state": "configuring", 00:20:31.137 "raid_level": "raid1", 00:20:31.137 "superblock": true, 00:20:31.137 "num_base_bdevs": 4, 00:20:31.137 "num_base_bdevs_discovered": 2, 00:20:31.137 "num_base_bdevs_operational": 3, 00:20:31.137 "base_bdevs_list": [ 00:20:31.137 { 00:20:31.137 "name": null, 00:20:31.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.137 "is_configured": false, 00:20:31.137 "data_offset": 2048, 00:20:31.137 "data_size": 63488 00:20:31.137 }, 00:20:31.137 { 00:20:31.137 "name": "pt2", 00:20:31.137 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:31.137 "is_configured": true, 00:20:31.137 "data_offset": 2048, 00:20:31.137 "data_size": 63488 00:20:31.137 }, 00:20:31.137 { 00:20:31.137 "name": "pt3", 00:20:31.137 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:31.137 "is_configured": true, 00:20:31.137 "data_offset": 2048, 00:20:31.137 "data_size": 63488 00:20:31.137 }, 00:20:31.137 { 00:20:31.137 "name": null, 00:20:31.137 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:31.137 "is_configured": false, 00:20:31.137 "data_offset": 2048, 00:20:31.137 "data_size": 63488 00:20:31.137 } 00:20:31.137 ] 00:20:31.137 }' 00:20:31.137 22:28:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.137 22:28:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.704 22:28:30 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:31.704 22:28:30 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:31.704 22:28:30 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:31.704 22:28:30 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:31.963 [2024-07-14 22:28:30.978463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:31.963 [2024-07-14 22:28:30.978573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.963 [2024-07-14 22:28:30.978621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:31.963 [2024-07-14 22:28:30.978645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.963 [2024-07-14 22:28:30.979615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.963 [2024-07-14 22:28:30.979706] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:31.963 [2024-07-14 22:28:30.979856] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:31.963 [2024-07-14 22:28:30.979888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:31.963 [2024-07-14 22:28:30.980444] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x61600000bd80 00:20:31.963 [2024-07-14 22:28:30.980466] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:31.963 [2024-07-14 22:28:30.980638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:31.963 [2024-07-14 22:28:30.981403] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:20:31.963 [2024-07-14 22:28:30.981428] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:20:31.963 [2024-07-14 22:28:30.981657] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.963 pt4 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.963 22:28:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.222 22:28:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.222 "name": "raid_bdev1", 00:20:32.222 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:32.222 "strip_size_kb": 0, 00:20:32.222 "state": "online", 00:20:32.222 "raid_level": "raid1", 00:20:32.222 "superblock": true, 00:20:32.222 "num_base_bdevs": 4, 00:20:32.222 "num_base_bdevs_discovered": 3, 00:20:32.222 "num_base_bdevs_operational": 3, 00:20:32.222 "base_bdevs_list": [ 00:20:32.222 { 00:20:32.222 "name": null, 00:20:32.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.222 "is_configured": false, 00:20:32.222 "data_offset": 2048, 00:20:32.222 "data_size": 63488 00:20:32.222 }, 00:20:32.222 { 00:20:32.222 "name": "pt2", 00:20:32.222 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:32.222 "is_configured": true, 00:20:32.222 "data_offset": 2048, 00:20:32.222 "data_size": 63488 00:20:32.222 }, 00:20:32.222 { 00:20:32.222 "name": "pt3", 00:20:32.222 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:32.222 "is_configured": true, 00:20:32.222 "data_offset": 2048, 00:20:32.222 "data_size": 63488 00:20:32.222 }, 00:20:32.222 { 00:20:32.222 "name": "pt4", 00:20:32.222 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:32.222 "is_configured": true, 00:20:32.222 "data_offset": 2048, 00:20:32.222 "data_size": 63488 00:20:32.222 } 00:20:32.222 ] 00:20:32.222 }' 00:20:32.222 22:28:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.222 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.158 22:28:31 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:33.158 22:28:31 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:33.158 [2024-07-14 22:28:32.087231] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:20:33.158 [2024-07-14 22:28:32.087289] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.158 [2024-07-14 22:28:32.087389] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.158 [2024-07-14 22:28:32.087477] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.158 [2024-07-14 22:28:32.087505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:20:33.158 22:28:32 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.158 22:28:32 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:33.417 22:28:32 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:33.418 22:28:32 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:33.418 22:28:32 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:33.676 [2024-07-14 22:28:32.515293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:33.676 [2024-07-14 22:28:32.515395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.676 [2024-07-14 22:28:32.515441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:33.676 [2024-07-14 22:28:32.515467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.676 [2024-07-14 22:28:32.518197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.676 [2024-07-14 22:28:32.518285] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:33.676 [2024-07-14 22:28:32.518397] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:33.676 [2024-07-14 22:28:32.518485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:33.676 pt1 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.676 22:28:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.934 22:28:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.934 "name": "raid_bdev1", 00:20:33.934 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:33.934 "strip_size_kb": 0, 00:20:33.934 "state": "configuring", 00:20:33.934 "raid_level": "raid1", 00:20:33.934 "superblock": true, 00:20:33.934 "num_base_bdevs": 4, 00:20:33.934 "num_base_bdevs_discovered": 1, 
00:20:33.934 "num_base_bdevs_operational": 4, 00:20:33.934 "base_bdevs_list": [ 00:20:33.934 { 00:20:33.934 "name": "pt1", 00:20:33.934 "uuid": "0ff34392-c2ca-526b-91fe-dfba8e87fbd1", 00:20:33.934 "is_configured": true, 00:20:33.934 "data_offset": 2048, 00:20:33.934 "data_size": 63488 00:20:33.934 }, 00:20:33.934 { 00:20:33.934 "name": null, 00:20:33.934 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:33.934 "is_configured": false, 00:20:33.934 "data_offset": 2048, 00:20:33.934 "data_size": 63488 00:20:33.934 }, 00:20:33.934 { 00:20:33.934 "name": null, 00:20:33.934 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:33.934 "is_configured": false, 00:20:33.934 "data_offset": 2048, 00:20:33.934 "data_size": 63488 00:20:33.934 }, 00:20:33.934 { 00:20:33.934 "name": null, 00:20:33.934 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:33.934 "is_configured": false, 00:20:33.934 "data_offset": 2048, 00:20:33.934 "data_size": 63488 00:20:33.934 } 00:20:33.934 ] 00:20:33.934 }' 00:20:33.934 22:28:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.934 22:28:32 -- common/autotest_common.sh@10 -- # set +x 00:20:34.505 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:34.505 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:34.505 22:28:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:34.777 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:34.777 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:34.777 22:28:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:35.042 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:35.042 22:28:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:35.042 22:28:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:35.042 22:28:34 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:35.042 22:28:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:35.042 22:28:34 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:35.042 22:28:34 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:35.299 [2024-07-14 22:28:34.215745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:35.299 [2024-07-14 22:28:34.215898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.299 [2024-07-14 22:28:34.215951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:20:35.299 [2024-07-14 22:28:34.216293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.299 [2024-07-14 22:28:34.217105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.299 [2024-07-14 22:28:34.217196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:35.299 [2024-07-14 22:28:34.217596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:35.299 [2024-07-14 22:28:34.217639] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:35.299 [2024-07-14 22:28:34.217649] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.299 
[2024-07-14 22:28:34.217686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:20:35.299 [2024-07-14 22:28:34.218022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:35.299 pt4 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.299 22:28:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.557 22:28:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.557 "name": "raid_bdev1", 00:20:35.557 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:35.557 "strip_size_kb": 0, 00:20:35.557 "state": "configuring", 00:20:35.557 "raid_level": "raid1", 00:20:35.557 "superblock": true, 00:20:35.557 "num_base_bdevs": 4, 00:20:35.557 "num_base_bdevs_discovered": 1, 00:20:35.557 "num_base_bdevs_operational": 3, 00:20:35.557 "base_bdevs_list": [ 00:20:35.557 { 00:20:35.557 "name": null, 00:20:35.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.557 "is_configured": false, 00:20:35.557 "data_offset": 2048, 00:20:35.557 "data_size": 63488 00:20:35.557 }, 00:20:35.557 { 00:20:35.557 "name": null, 00:20:35.557 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:35.557 "is_configured": false, 00:20:35.557 "data_offset": 2048, 00:20:35.557 "data_size": 63488 00:20:35.557 }, 00:20:35.557 { 00:20:35.557 "name": null, 00:20:35.557 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:35.557 "is_configured": false, 00:20:35.557 "data_offset": 2048, 00:20:35.557 "data_size": 63488 00:20:35.557 }, 00:20:35.557 { 00:20:35.557 "name": "pt4", 00:20:35.557 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:35.557 "is_configured": true, 00:20:35.557 "data_offset": 2048, 00:20:35.557 "data_size": 63488 00:20:35.557 } 00:20:35.557 ] 00:20:35.557 }' 00:20:35.557 22:28:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.557 22:28:34 -- common/autotest_common.sh@10 -- # set +x 00:20:36.122 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:36.122 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:36.122 22:28:35 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:36.381 [2024-07-14 22:28:35.216498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:36.381 [2024-07-14 22:28:35.216646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.381 [2024-07-14 22:28:35.216694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000d580 00:20:36.381 [2024-07-14 22:28:35.216988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.381 [2024-07-14 22:28:35.217785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.381 [2024-07-14 22:28:35.217858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:36.381 [2024-07-14 22:28:35.218226] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:36.381 [2024-07-14 22:28:35.218266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:36.381 pt2 00:20:36.381 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:36.381 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:36.381 22:28:35 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:36.639 [2024-07-14 22:28:35.472625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:36.639 [2024-07-14 22:28:35.472756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.639 [2024-07-14 22:28:35.472800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:20:36.639 [2024-07-14 22:28:35.472830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.639 [2024-07-14 22:28:35.473779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.639 [2024-07-14 22:28:35.473884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:36.639 [2024-07-14 22:28:35.474014] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:36.639 [2024-07-14 22:28:35.474061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:36.639 [2024-07-14 22:28:35.474634] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:20:36.639 [2024-07-14 22:28:35.474657] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:36.639 [2024-07-14 22:28:35.474899] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:20:36.639 [2024-07-14 22:28:35.475418] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:20:36.639 [2024-07-14 22:28:35.475441] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:20:36.639 [2024-07-14 22:28:35.475627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.639 pt3 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.639 
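With pt2 and pt3 claimed the array has its three operational members again and flips to online on its own, leaving the deleted pt1 slot unconfigured (num_base_bdevs stays 4). The category argument to bdev_raid_get_bdevs makes that transition easy to watch; a sketch, assuming the category filters (online, configuring, offline) are available alongside the all used throughout this trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_get_bdevs online | jq -r '.[].name'        # now lists raid_bdev1
$rpc bdev_raid_get_bdevs configuring | jq -r '.[].name'   # now empty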
22:28:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.639 "name": "raid_bdev1", 00:20:36.639 "uuid": "52fe5356-b174-4bff-aaf4-61555f6f2cb5", 00:20:36.639 "strip_size_kb": 0, 00:20:36.639 "state": "online", 00:20:36.639 "raid_level": "raid1", 00:20:36.639 "superblock": true, 00:20:36.639 "num_base_bdevs": 4, 00:20:36.639 "num_base_bdevs_discovered": 3, 00:20:36.639 "num_base_bdevs_operational": 3, 00:20:36.639 "base_bdevs_list": [ 00:20:36.639 { 00:20:36.639 "name": null, 00:20:36.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.639 "is_configured": false, 00:20:36.639 "data_offset": 2048, 00:20:36.639 "data_size": 63488 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "name": "pt2", 00:20:36.639 "uuid": "d2a4c540-b79f-58cf-be76-f7e031f18bd0", 00:20:36.639 "is_configured": true, 00:20:36.639 "data_offset": 2048, 00:20:36.639 "data_size": 63488 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "name": "pt3", 00:20:36.639 "uuid": "7a534d21-15e0-5457-9793-6b8555cce840", 00:20:36.639 "is_configured": true, 00:20:36.639 "data_offset": 2048, 00:20:36.639 "data_size": 63488 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "name": "pt4", 00:20:36.639 "uuid": "3611bf7f-454c-508c-b5f0-76c7206014fa", 00:20:36.639 "is_configured": true, 00:20:36.639 "data_offset": 2048, 00:20:36.639 "data_size": 63488 00:20:36.639 } 00:20:36.639 ] 00:20:36.639 }' 00:20:36.639 22:28:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.639 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:20:37.573 22:28:36 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:37.573 22:28:36 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:37.573 [2024-07-14 22:28:36.545186] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.573 22:28:36 -- bdev/bdev_raid.sh@506 -- # '[' 52fe5356-b174-4bff-aaf4-61555f6f2cb5 '!=' 52fe5356-b174-4bff-aaf4-61555f6f2cb5 ']' 00:20:37.573 22:28:36 -- bdev/bdev_raid.sh@511 -- # killprocess 124816 00:20:37.573 22:28:36 -- common/autotest_common.sh@926 -- # '[' -z 124816 ']' 00:20:37.573 22:28:36 -- common/autotest_common.sh@930 -- # kill -0 124816 00:20:37.573 22:28:36 -- common/autotest_common.sh@931 -- # uname 00:20:37.573 22:28:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.573 22:28:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124816 00:20:37.573 killing process with pid 124816 00:20:37.573 22:28:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:37.573 22:28:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:37.573 22:28:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124816' 00:20:37.573 22:28:36 -- common/autotest_common.sh@945 -- # kill 124816 00:20:37.573 22:28:36 -- common/autotest_common.sh@950 -- # wait 124816 00:20:37.573 [2024-07-14 22:28:36.577942] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.573 [2024-07-14 22:28:36.578100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.573 [2024-07-14 22:28:36.578222] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.573 [2024-07-14 22:28:36.578248] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:20:38.140 [2024-07-14 22:28:36.919408] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.071 ************************************ 00:20:39.071 END TEST raid_superblock_test 00:20:39.071 ************************************ 00:20:39.071 22:28:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:39.071 00:20:39.071 real 0m22.163s 00:20:39.071 user 0m40.689s 00:20:39.071 sys 0m2.541s 00:20:39.071 22:28:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.071 22:28:38 -- common/autotest_common.sh@10 -- # set +x 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:39.328 22:28:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:39.328 22:28:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:39.328 22:28:38 -- common/autotest_common.sh@10 -- # set +x 00:20:39.328 ************************************ 00:20:39.328 START TEST raid_rebuild_test 00:20:39.328 ************************************ 00:20:39.328 22:28:38 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@544 -- # raid_pid=125526 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125526 /var/tmp/spdk-raid.sock 00:20:39.328 22:28:38 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:39.328 22:28:38 -- 
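raid_rebuild_test drives everything through bdevperf rather than a bare RPC target, so the raid under test can absorb randrw traffic while base bdevs are yanked and re-added. A condensed sketch of the launch being traced here (flags copied from the command line above; waitforlisten is the autotest_common.sh helper that polls until the RPC socket answers):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten $raid_pid /var/tmp/spdk-raid.sock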
common/autotest_common.sh@819 -- # '[' -z 125526 ']' 00:20:39.328 22:28:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:39.328 22:28:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.328 22:28:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:39.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:39.328 22:28:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.328 22:28:38 -- common/autotest_common.sh@10 -- # set +x 00:20:39.328 [2024-07-14 22:28:38.250276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:39.328 [2024-07-14 22:28:38.251106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125526 ] 00:20:39.328 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:39.328 Zero copy mechanism will not be used. 00:20:39.586 [2024-07-14 22:28:38.412353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.843 [2024-07-14 22:28:38.672543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.843 [2024-07-14 22:28:38.868907] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.099 22:28:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:40.099 22:28:39 -- common/autotest_common.sh@852 -- # return 0 00:20:40.099 22:28:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:40.099 22:28:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:40.099 22:28:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:40.356 BaseBdev1 00:20:40.356 22:28:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:40.356 22:28:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:40.356 22:28:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:40.614 BaseBdev2 00:20:40.614 22:28:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:40.872 spare_malloc 00:20:40.872 22:28:39 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:41.130 spare_delay 00:20:41.130 22:28:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:41.387 [2024-07-14 22:28:40.318731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:41.388 [2024-07-14 22:28:40.318970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.388 [2024-07-14 22:28:40.319041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:41.388 [2024-07-14 22:28:40.319275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.388 [2024-07-14 22:28:40.321808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.388 [2024-07-14 
22:28:40.322022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:41.388 spare 00:20:41.388 22:28:40 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:41.645 [2024-07-14 22:28:40.518985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.645 [2024-07-14 22:28:40.520952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.645 [2024-07-14 22:28:40.521203] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:41.645 [2024-07-14 22:28:40.521313] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:41.645 [2024-07-14 22:28:40.521585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:41.645 [2024-07-14 22:28:40.522104] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:41.645 [2024-07-14 22:28:40.522261] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:41.645 [2024-07-14 22:28:40.522547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.645 22:28:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.902 22:28:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.902 "name": "raid_bdev1", 00:20:41.902 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:41.902 "strip_size_kb": 0, 00:20:41.902 "state": "online", 00:20:41.902 "raid_level": "raid1", 00:20:41.902 "superblock": false, 00:20:41.902 "num_base_bdevs": 2, 00:20:41.902 "num_base_bdevs_discovered": 2, 00:20:41.902 "num_base_bdevs_operational": 2, 00:20:41.902 "base_bdevs_list": [ 00:20:41.902 { 00:20:41.902 "name": "BaseBdev1", 00:20:41.902 "uuid": "2de16af7-4d03-4c16-aec0-d1ec5387765c", 00:20:41.902 "is_configured": true, 00:20:41.902 "data_offset": 0, 00:20:41.902 "data_size": 65536 00:20:41.902 }, 00:20:41.902 { 00:20:41.902 "name": "BaseBdev2", 00:20:41.902 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:41.902 "is_configured": true, 00:20:41.902 "data_offset": 0, 00:20:41.902 "data_size": 65536 00:20:41.902 } 00:20:41.902 ] 00:20:41.902 }' 00:20:41.902 22:28:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.902 22:28:40 -- common/autotest_common.sh@10 -- # set +x 00:20:42.467 22:28:41 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:42.467 22:28:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:42.725 [2024-07-14 22:28:41.547325] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:42.725 22:28:41 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@12 -- # local i 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.725 22:28:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:42.983 [2024-07-14 22:28:41.967432] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:42.983 /dev/nbd0 00:20:42.983 22:28:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.983 22:28:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.983 22:28:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:42.983 22:28:42 -- common/autotest_common.sh@857 -- # local i 00:20:42.983 22:28:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:42.983 22:28:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:42.983 22:28:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:42.983 22:28:42 -- common/autotest_common.sh@861 -- # break 00:20:42.983 22:28:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:42.983 22:28:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:42.983 22:28:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.983 1+0 records in 00:20:42.983 1+0 records out 00:20:42.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360223 s, 11.4 MB/s 00:20:42.983 22:28:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.983 22:28:42 -- common/autotest_common.sh@874 -- # size=4096 00:20:42.983 22:28:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.983 22:28:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:42.983 22:28:42 -- common/autotest_common.sh@877 -- # return 0 00:20:42.983 22:28:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.983 22:28:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.983 22:28:42 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:42.983 22:28:42 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:42.983 22:28:42 -- bdev/bdev_raid.sh@586 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:48.257 65536+0 records in 00:20:48.257 65536+0 records out 00:20:48.257 33554432 bytes (34 MB, 32 MiB) copied, 4.79417 s, 7.0 MB/s 00:20:48.257 22:28:46 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@51 -- # local i 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.257 22:28:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:48.257 [2024-07-14 22:28:47.074026] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@41 -- # break 00:20:48.257 22:28:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.257 22:28:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:48.515 [2024-07-14 22:28:47.345774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.515 22:28:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.773 22:28:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.773 "name": "raid_bdev1", 00:20:48.773 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:48.773 "strip_size_kb": 0, 00:20:48.773 "state": "online", 00:20:48.773 "raid_level": "raid1", 00:20:48.773 "superblock": false, 00:20:48.773 "num_base_bdevs": 2, 00:20:48.773 "num_base_bdevs_discovered": 1, 00:20:48.773 "num_base_bdevs_operational": 1, 00:20:48.773 "base_bdevs_list": [ 00:20:48.773 { 00:20:48.773 "name": null, 
00:20:48.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.773 "is_configured": false, 00:20:48.773 "data_offset": 0, 00:20:48.773 "data_size": 65536 00:20:48.773 }, 00:20:48.773 { 00:20:48.773 "name": "BaseBdev2", 00:20:48.773 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:48.773 "is_configured": true, 00:20:48.773 "data_offset": 0, 00:20:48.773 "data_size": 65536 00:20:48.773 } 00:20:48.773 ] 00:20:48.773 }' 00:20:48.773 22:28:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.773 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.353 22:28:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.622 [2024-07-14 22:28:48.426177] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:49.622 [2024-07-14 22:28:48.426264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.622 [2024-07-14 22:28:48.440225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:20:49.622 [2024-07-14 22:28:48.442478] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.622 22:28:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.556 22:28:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.814 "name": "raid_bdev1", 00:20:50.814 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:50.814 "strip_size_kb": 0, 00:20:50.814 "state": "online", 00:20:50.814 "raid_level": "raid1", 00:20:50.814 "superblock": false, 00:20:50.814 "num_base_bdevs": 2, 00:20:50.814 "num_base_bdevs_discovered": 2, 00:20:50.814 "num_base_bdevs_operational": 2, 00:20:50.814 "process": { 00:20:50.814 "type": "rebuild", 00:20:50.814 "target": "spare", 00:20:50.814 "progress": { 00:20:50.814 "blocks": 24576, 00:20:50.814 "percent": 37 00:20:50.814 } 00:20:50.814 }, 00:20:50.814 "base_bdevs_list": [ 00:20:50.814 { 00:20:50.814 "name": "spare", 00:20:50.814 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:50.814 "is_configured": true, 00:20:50.814 "data_offset": 0, 00:20:50.814 "data_size": 65536 00:20:50.814 }, 00:20:50.814 { 00:20:50.814 "name": "BaseBdev2", 00:20:50.814 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:50.814 "is_configured": true, 00:20:50.814 "data_offset": 0, 00:20:50.814 "data_size": 65536 00:20:50.814 } 00:20:50.814 ] 00:20:50.814 }' 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.814 22:28:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:20:51.072 [2024-07-14 22:28:49.952368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.072 [2024-07-14 22:28:49.952938] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.072 [2024-07-14 22:28:49.953059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.072 22:28:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.330 22:28:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.330 "name": "raid_bdev1", 00:20:51.330 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:51.330 "strip_size_kb": 0, 00:20:51.330 "state": "online", 00:20:51.330 "raid_level": "raid1", 00:20:51.330 "superblock": false, 00:20:51.330 "num_base_bdevs": 2, 00:20:51.330 "num_base_bdevs_discovered": 1, 00:20:51.330 "num_base_bdevs_operational": 1, 00:20:51.330 "base_bdevs_list": [ 00:20:51.330 { 00:20:51.330 "name": null, 00:20:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.330 "is_configured": false, 00:20:51.330 "data_offset": 0, 00:20:51.330 "data_size": 65536 00:20:51.330 }, 00:20:51.330 { 00:20:51.330 "name": "BaseBdev2", 00:20:51.330 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:51.330 "is_configured": true, 00:20:51.330 "data_offset": 0, 00:20:51.330 "data_size": 65536 00:20:51.330 } 00:20:51.330 ] 00:20:51.330 }' 00:20:51.330 22:28:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.330 22:28:50 -- common/autotest_common.sh@10 -- # set +x 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.896 22:28:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.154 22:28:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:52.154 "name": "raid_bdev1", 00:20:52.154 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:52.154 "strip_size_kb": 0, 00:20:52.154 "state": "online", 00:20:52.154 "raid_level": "raid1", 00:20:52.154 "superblock": false, 00:20:52.154 "num_base_bdevs": 2, 00:20:52.154 "num_base_bdevs_discovered": 1, 00:20:52.154 
"num_base_bdevs_operational": 1, 00:20:52.154 "base_bdevs_list": [ 00:20:52.154 { 00:20:52.154 "name": null, 00:20:52.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.154 "is_configured": false, 00:20:52.154 "data_offset": 0, 00:20:52.154 "data_size": 65536 00:20:52.154 }, 00:20:52.154 { 00:20:52.154 "name": "BaseBdev2", 00:20:52.154 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:52.154 "is_configured": true, 00:20:52.155 "data_offset": 0, 00:20:52.155 "data_size": 65536 00:20:52.155 } 00:20:52.155 ] 00:20:52.155 }' 00:20:52.155 22:28:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:52.155 22:28:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:52.155 22:28:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:52.413 22:28:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:52.413 22:28:51 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:52.413 [2024-07-14 22:28:51.465581] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:52.413 [2024-07-14 22:28:51.465664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.671 [2024-07-14 22:28:51.479986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:52.671 [2024-07-14 22:28:51.482420] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.671 22:28:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.606 22:28:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.865 "name": "raid_bdev1", 00:20:53.865 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:53.865 "strip_size_kb": 0, 00:20:53.865 "state": "online", 00:20:53.865 "raid_level": "raid1", 00:20:53.865 "superblock": false, 00:20:53.865 "num_base_bdevs": 2, 00:20:53.865 "num_base_bdevs_discovered": 2, 00:20:53.865 "num_base_bdevs_operational": 2, 00:20:53.865 "process": { 00:20:53.865 "type": "rebuild", 00:20:53.865 "target": "spare", 00:20:53.865 "progress": { 00:20:53.865 "blocks": 24576, 00:20:53.865 "percent": 37 00:20:53.865 } 00:20:53.865 }, 00:20:53.865 "base_bdevs_list": [ 00:20:53.865 { 00:20:53.865 "name": "spare", 00:20:53.865 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:53.865 "is_configured": true, 00:20:53.865 "data_offset": 0, 00:20:53.865 "data_size": 65536 00:20:53.865 }, 00:20:53.865 { 00:20:53.865 "name": "BaseBdev2", 00:20:53.865 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:53.865 "is_configured": true, 00:20:53.865 "data_offset": 0, 00:20:53.865 "data_size": 65536 00:20:53.865 } 00:20:53.865 ] 00:20:53.865 }' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@657 -- # local timeout=389 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.865 22:28:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.124 "name": "raid_bdev1", 00:20:54.124 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:54.124 "strip_size_kb": 0, 00:20:54.124 "state": "online", 00:20:54.124 "raid_level": "raid1", 00:20:54.124 "superblock": false, 00:20:54.124 "num_base_bdevs": 2, 00:20:54.124 "num_base_bdevs_discovered": 2, 00:20:54.124 "num_base_bdevs_operational": 2, 00:20:54.124 "process": { 00:20:54.124 "type": "rebuild", 00:20:54.124 "target": "spare", 00:20:54.124 "progress": { 00:20:54.124 "blocks": 30720, 00:20:54.124 "percent": 46 00:20:54.124 } 00:20:54.124 }, 00:20:54.124 "base_bdevs_list": [ 00:20:54.124 { 00:20:54.124 "name": "spare", 00:20:54.124 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:54.124 "is_configured": true, 00:20:54.124 "data_offset": 0, 00:20:54.124 "data_size": 65536 00:20:54.124 }, 00:20:54.124 { 00:20:54.124 "name": "BaseBdev2", 00:20:54.124 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:54.124 "is_configured": true, 00:20:54.124 "data_offset": 0, 00:20:54.124 "data_size": 65536 00:20:54.124 } 00:20:54.124 ] 00:20:54.124 }' 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.124 22:28:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.496 22:28:54 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.496 "name": "raid_bdev1", 00:20:55.496 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:55.496 "strip_size_kb": 0, 00:20:55.496 "state": "online", 00:20:55.496 "raid_level": "raid1", 00:20:55.496 "superblock": false, 00:20:55.496 "num_base_bdevs": 2, 00:20:55.496 "num_base_bdevs_discovered": 2, 00:20:55.496 "num_base_bdevs_operational": 2, 00:20:55.496 "process": { 00:20:55.496 "type": "rebuild", 00:20:55.496 "target": "spare", 00:20:55.496 "progress": { 00:20:55.496 "blocks": 57344, 00:20:55.496 "percent": 87 00:20:55.496 } 00:20:55.496 }, 00:20:55.496 "base_bdevs_list": [ 00:20:55.496 { 00:20:55.496 "name": "spare", 00:20:55.496 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:55.496 "is_configured": true, 00:20:55.496 "data_offset": 0, 00:20:55.496 "data_size": 65536 00:20:55.496 }, 00:20:55.496 { 00:20:55.496 "name": "BaseBdev2", 00:20:55.496 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:55.496 "is_configured": true, 00:20:55.496 "data_offset": 0, 00:20:55.496 "data_size": 65536 00:20:55.496 } 00:20:55.496 ] 00:20:55.496 }' 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.496 22:28:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:55.754 [2024-07-14 22:28:54.703220] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:55.754 [2024-07-14 22:28:54.703294] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.754 [2024-07-14 22:28:54.703390] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.686 22:28:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.944 "name": "raid_bdev1", 00:20:56.944 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:56.944 "strip_size_kb": 0, 00:20:56.944 "state": "online", 00:20:56.944 "raid_level": "raid1", 00:20:56.944 "superblock": false, 00:20:56.944 "num_base_bdevs": 2, 00:20:56.944 "num_base_bdevs_discovered": 2, 00:20:56.944 "num_base_bdevs_operational": 2, 00:20:56.944 "base_bdevs_list": [ 00:20:56.944 { 00:20:56.944 "name": "spare", 00:20:56.944 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:56.944 "is_configured": true, 00:20:56.944 "data_offset": 0, 00:20:56.944 "data_size": 65536 00:20:56.944 }, 00:20:56.944 { 00:20:56.944 "name": "BaseBdev2", 00:20:56.944 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:56.944 "is_configured": true, 00:20:56.944 "data_offset": 0, 00:20:56.944 "data_size": 65536 00:20:56.944 } 00:20:56.944 ] 00:20:56.944 }' 
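Once bdev_raid logs Finished rebuild (the 22:28:54 entries above), the process block drops out of the dump, the probe below returns none, and the loop breaks to assert the end state: spare occupies the slot BaseBdev1 vacated and both members report is_configured. The closing assertion, condensed (a sketch, reusing the $rpc shorthand from the earlier annotations):

tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(jq -r '.process.type // "none"' <<< "$tmp")" = none ]
[ "$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$tmp")" -eq 2 ]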
00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@660 -- # break 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.944 22:28:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.202 "name": "raid_bdev1", 00:20:57.202 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:57.202 "strip_size_kb": 0, 00:20:57.202 "state": "online", 00:20:57.202 "raid_level": "raid1", 00:20:57.202 "superblock": false, 00:20:57.202 "num_base_bdevs": 2, 00:20:57.202 "num_base_bdevs_discovered": 2, 00:20:57.202 "num_base_bdevs_operational": 2, 00:20:57.202 "base_bdevs_list": [ 00:20:57.202 { 00:20:57.202 "name": "spare", 00:20:57.202 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:57.202 "is_configured": true, 00:20:57.202 "data_offset": 0, 00:20:57.202 "data_size": 65536 00:20:57.202 }, 00:20:57.202 { 00:20:57.202 "name": "BaseBdev2", 00:20:57.202 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:57.202 "is_configured": true, 00:20:57.202 "data_offset": 0, 00:20:57.202 "data_size": 65536 00:20:57.202 } 00:20:57.202 ] 00:20:57.202 }' 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.202 22:28:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.460 22:28:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.460 "name": "raid_bdev1", 00:20:57.460 "uuid": "6c77dfb9-f536-477f-806e-f4e4b30b5f08", 00:20:57.460 "strip_size_kb": 0, 00:20:57.460 "state": "online", 
00:20:57.460 "raid_level": "raid1", 00:20:57.460 "superblock": false, 00:20:57.460 "num_base_bdevs": 2, 00:20:57.460 "num_base_bdevs_discovered": 2, 00:20:57.460 "num_base_bdevs_operational": 2, 00:20:57.460 "base_bdevs_list": [ 00:20:57.460 { 00:20:57.460 "name": "spare", 00:20:57.460 "uuid": "0b38ffe9-c593-503c-8795-577cb4242628", 00:20:57.460 "is_configured": true, 00:20:57.460 "data_offset": 0, 00:20:57.460 "data_size": 65536 00:20:57.460 }, 00:20:57.460 { 00:20:57.460 "name": "BaseBdev2", 00:20:57.460 "uuid": "8c7b6543-5c7e-4b1b-86d4-4e8ced9a2acf", 00:20:57.460 "is_configured": true, 00:20:57.460 "data_offset": 0, 00:20:57.460 "data_size": 65536 00:20:57.460 } 00:20:57.460 ] 00:20:57.460 }' 00:20:57.460 22:28:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.460 22:28:56 -- common/autotest_common.sh@10 -- # set +x 00:20:58.394 22:28:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:58.394 [2024-07-14 22:28:57.330468] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.394 [2024-07-14 22:28:57.330513] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.395 [2024-07-14 22:28:57.330621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.395 [2024-07-14 22:28:57.330702] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.395 [2024-07-14 22:28:57.330716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:58.395 22:28:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.395 22:28:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:58.653 22:28:57 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:58.653 22:28:57 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:58.653 22:28:57 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@12 -- # local i 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:58.653 22:28:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:58.912 /dev/nbd0 00:20:58.912 22:28:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:58.912 22:28:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:58.912 22:28:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:58.912 22:28:57 -- common/autotest_common.sh@857 -- # local i 00:20:58.912 22:28:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:58.912 22:28:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:58.912 22:28:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:58.912 22:28:57 -- common/autotest_common.sh@861 -- # break 00:20:58.912 22:28:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:58.912 
22:28:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:58.912 22:28:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:58.912 1+0 records in 00:20:58.912 1+0 records out 00:20:58.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025069 s, 16.3 MB/s 00:20:58.912 22:28:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.912 22:28:57 -- common/autotest_common.sh@874 -- # size=4096 00:20:58.912 22:28:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.912 22:28:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:58.912 22:28:57 -- common/autotest_common.sh@877 -- # return 0 00:20:58.912 22:28:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:58.912 22:28:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:58.912 22:28:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:59.171 /dev/nbd1 00:20:59.171 22:28:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:59.171 22:28:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:59.171 22:28:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:59.171 22:28:58 -- common/autotest_common.sh@857 -- # local i 00:20:59.171 22:28:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:59.171 22:28:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:59.171 22:28:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:59.171 22:28:58 -- common/autotest_common.sh@861 -- # break 00:20:59.171 22:28:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:59.171 22:28:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:59.171 22:28:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.171 1+0 records in 00:20:59.171 1+0 records out 00:20:59.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361062 s, 11.3 MB/s 00:20:59.171 22:28:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.171 22:28:58 -- common/autotest_common.sh@874 -- # size=4096 00:20:59.171 22:28:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.171 22:28:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:59.171 22:28:58 -- common/autotest_common.sh@877 -- # return 0 00:20:59.171 22:28:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.171 22:28:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.171 22:28:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:59.430 22:28:58 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@51 -- # local i 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
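Both nbd devices pass the same two-stage readiness probe seen above: waitfornbd first retries until the name shows up in /proc/partitions, then performs a single 4 KiB O_DIRECT read and checks that a non-empty file came back. A condensed sketch of that probe, collapsed into one function (the scratch path here is an assumption; the trace writes to test/bdev/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do             # same retry budget as the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a direct read proves the device services I/O, not just that the node exists
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest
        [[ $size != 0 ]]                            # succeed only if data was copied
    }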
00:20:59.430 22:28:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:59.430 22:28:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@41 -- # break 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.689 22:28:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:59.947 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:59.948 22:28:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.948 22:28:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:59.948 22:28:58 -- bdev/nbd_common.sh@41 -- # break 00:20:59.948 22:28:58 -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.948 22:28:58 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:59.948 22:28:58 -- bdev/bdev_raid.sh@709 -- # killprocess 125526 00:20:59.948 22:28:58 -- common/autotest_common.sh@926 -- # '[' -z 125526 ']' 00:20:59.948 22:28:58 -- common/autotest_common.sh@930 -- # kill -0 125526 00:20:59.948 22:28:58 -- common/autotest_common.sh@931 -- # uname 00:20:59.948 22:28:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.948 22:28:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125526 00:20:59.948 killing process with pid 125526 00:20:59.948 Received shutdown signal, test time was about 60.000000 seconds 00:20:59.948 00:20:59.948 Latency(us) 00:20:59.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.948 =================================================================================================================== 00:20:59.948 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.948 22:28:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.948 22:28:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.948 22:28:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125526' 00:20:59.948 22:28:58 -- common/autotest_common.sh@945 -- # kill 125526 00:20:59.948 22:28:58 -- common/autotest_common.sh@950 -- # wait 125526 00:20:59.948 [2024-07-14 22:28:58.938108] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.207 [2024-07-14 22:28:59.153568] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.584 ************************************ 00:21:01.584 END TEST raid_rebuild_test 00:21:01.584 ************************************ 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:01.584 
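Teardown goes through killprocess, which refuses to signal a pid unless it still looks like the SPDK reactor, guarding against pid reuse; the 18446744073709551616.00 (2^64) in the min column of the empty bdevperf latency table above is evidently a never-updated UINT64_MAX-style initializer, since no I/O was submitted before shutdown. A rough sketch of the guard, with the checks condensed from the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0                  # already gone
        # refuse to kill an unrelated process that recycled the pid
        [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                              # reap, propagate exit code
    }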
00:21:01.584 real 0m22.022s 00:21:01.584 user 0m29.974s 00:21:01.584 sys 0m4.096s 00:21:01.584 22:29:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.584 22:29:00 -- common/autotest_common.sh@10 -- # set +x 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:21:01.584 22:29:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:01.584 22:29:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.584 22:29:00 -- common/autotest_common.sh@10 -- # set +x 00:21:01.584 ************************************ 00:21:01.584 START TEST raid_rebuild_test_sb 00:21:01.584 ************************************ 00:21:01.584 22:29:00 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@544 -- # raid_pid=126139 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126139 /var/tmp/spdk-raid.sock 00:21:01.584 22:29:00 -- common/autotest_common.sh@819 -- # '[' -z 126139 ']' 00:21:01.584 22:29:00 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:01.584 22:29:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.584 22:29:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.585 22:29:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
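raid_rebuild_test_sb begins the same way as the non-superblock variant: expand the BaseBdev list, append -s to the create arguments, start a suspended bdevperf (-z) with bdev_raid debug logging, and block in waitforlisten until the RPC socket answers. A condensed sketch of that startup handshake; the command line is copied from the trace, while the readiness probe is an assumption standing in for the real waitforlisten helper in autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # poll until an RPC round-trips on the UNIX socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.1
    done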
00:21:01.585 22:29:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.585 22:29:00 -- common/autotest_common.sh@10 -- # set +x 00:21:01.585 [2024-07-14 22:29:00.336602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:01.585 [2024-07-14 22:29:00.337438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126139 ] 00:21:01.585 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:01.585 Zero copy mechanism will not be used. 00:21:01.585 [2024-07-14 22:29:00.507279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.843 [2024-07-14 22:29:00.710846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.843 [2024-07-14 22:29:00.907761] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.410 22:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.410 22:29:01 -- common/autotest_common.sh@852 -- # return 0 00:21:02.410 22:29:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:02.410 22:29:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:02.410 22:29:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:02.678 BaseBdev1_malloc 00:21:02.678 22:29:01 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:02.958 [2024-07-14 22:29:01.767167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:02.958 [2024-07-14 22:29:01.767296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.958 [2024-07-14 22:29:01.767338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:02.958 [2024-07-14 22:29:01.767380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.958 [2024-07-14 22:29:01.769682] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.958 [2024-07-14 22:29:01.769729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:02.958 BaseBdev1 00:21:02.958 22:29:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:02.958 22:29:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:02.958 22:29:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:02.958 BaseBdev2_malloc 00:21:03.217 22:29:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:03.217 [2024-07-14 22:29:02.217565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:03.217 [2024-07-14 22:29:02.217688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.217 [2024-07-14 22:29:02.217746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:03.217 [2024-07-14 22:29:02.217804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.217 [2024-07-14 22:29:02.220278] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:03.217 [2024-07-14 22:29:02.220323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:03.217 BaseBdev2 00:21:03.217 22:29:02 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:03.475 spare_malloc 00:21:03.475 22:29:02 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:03.734 spare_delay 00:21:03.734 22:29:02 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:03.992 [2024-07-14 22:29:02.887684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.992 [2024-07-14 22:29:02.887806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.992 [2024-07-14 22:29:02.887854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:03.992 [2024-07-14 22:29:02.887898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.992 [2024-07-14 22:29:02.890645] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.992 [2024-07-14 22:29:02.890716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.992 spare 00:21:03.992 22:29:02 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:04.251 [2024-07-14 22:29:03.155868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.251 [2024-07-14 22:29:03.157602] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.251 [2024-07-14 22:29:03.157881] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:04.251 [2024-07-14 22:29:03.157904] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:04.251 [2024-07-14 22:29:03.158050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:04.251 [2024-07-14 22:29:03.158437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:04.251 [2024-07-14 22:29:03.158465] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:04.251 [2024-07-14 22:29:03.158616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.251 22:29:03 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.251 22:29:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.510 22:29:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.510 "name": "raid_bdev1", 00:21:04.510 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:04.510 "strip_size_kb": 0, 00:21:04.510 "state": "online", 00:21:04.510 "raid_level": "raid1", 00:21:04.510 "superblock": true, 00:21:04.510 "num_base_bdevs": 2, 00:21:04.510 "num_base_bdevs_discovered": 2, 00:21:04.510 "num_base_bdevs_operational": 2, 00:21:04.510 "base_bdevs_list": [ 00:21:04.510 { 00:21:04.510 "name": "BaseBdev1", 00:21:04.510 "uuid": "5012804b-617f-5ca0-a8c0-bf529bc8048c", 00:21:04.510 "is_configured": true, 00:21:04.510 "data_offset": 2048, 00:21:04.510 "data_size": 63488 00:21:04.510 }, 00:21:04.510 { 00:21:04.510 "name": "BaseBdev2", 00:21:04.510 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:04.510 "is_configured": true, 00:21:04.510 "data_offset": 2048, 00:21:04.510 "data_size": 63488 00:21:04.510 } 00:21:04.510 ] 00:21:04.510 }' 00:21:04.510 22:29:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.510 22:29:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.075 22:29:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.075 22:29:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:05.335 [2024-07-14 22:29:04.312640] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.335 22:29:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:05.335 22:29:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.335 22:29:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:05.593 22:29:04 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:05.593 22:29:04 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:05.593 22:29:04 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:05.593 22:29:04 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@12 -- # local i 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.593 22:29:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:05.850 [2024-07-14 22:29:04.832451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:05.850 /dev/nbd0 00:21:05.850 22:29:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:05.850 22:29:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:05.850 22:29:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:05.850 22:29:04 -- common/autotest_common.sh@857 -- # local i 00:21:05.850 22:29:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:05.850 22:29:04 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:05.850 22:29:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:05.850 22:29:04 -- common/autotest_common.sh@861 -- # break 00:21:05.850 22:29:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:05.850 22:29:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:05.850 22:29:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.850 1+0 records in 00:21:05.850 1+0 records out 00:21:05.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327169 s, 12.5 MB/s 00:21:05.850 22:29:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.850 22:29:04 -- common/autotest_common.sh@874 -- # size=4096 00:21:05.850 22:29:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.850 22:29:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:05.851 22:29:04 -- common/autotest_common.sh@877 -- # return 0 00:21:05.851 22:29:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.851 22:29:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.851 22:29:04 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:05.851 22:29:04 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:05.851 22:29:04 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:11.112 63488+0 records in 00:21:11.112 63488+0 records out 00:21:11.112 32505856 bytes (33 MB, 31 MiB) copied, 4.80287 s, 6.8 MB/s 00:21:11.112 22:29:09 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@51 -- # local i 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:11.112 22:29:09 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:11.112 [2024-07-14 22:29:09.939910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.112 22:29:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:11.112 22:29:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.112 22:29:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:11.112 22:29:10 -- bdev/nbd_common.sh@41 -- # break 00:21:11.112 22:29:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.112 22:29:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:11.370 [2024-07-14 22:29:10.215502] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:11.370 22:29:10 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.370 "name": "raid_bdev1", 00:21:11.370 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:11.370 "strip_size_kb": 0, 00:21:11.370 "state": "online", 00:21:11.370 "raid_level": "raid1", 00:21:11.370 "superblock": true, 00:21:11.370 "num_base_bdevs": 2, 00:21:11.370 "num_base_bdevs_discovered": 1, 00:21:11.370 "num_base_bdevs_operational": 1, 00:21:11.370 "base_bdevs_list": [ 00:21:11.370 { 00:21:11.370 "name": null, 00:21:11.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.370 "is_configured": false, 00:21:11.370 "data_offset": 2048, 00:21:11.370 "data_size": 63488 00:21:11.370 }, 00:21:11.370 { 00:21:11.370 "name": "BaseBdev2", 00:21:11.370 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:11.370 "is_configured": true, 00:21:11.370 "data_offset": 2048, 00:21:11.370 "data_size": 63488 00:21:11.370 } 00:21:11.370 ] 00:21:11.370 }' 00:21:11.370 22:29:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.370 22:29:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.302 22:29:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.559 [2024-07-14 22:29:11.395755] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:12.559 [2024-07-14 22:29:11.395817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.559 [2024-07-14 22:29:11.408700] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:21:12.559 [2024-07-14 22:29:11.410849] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.559 22:29:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.492 22:29:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.750 "name": "raid_bdev1", 00:21:13.750 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:13.750 
"strip_size_kb": 0, 00:21:13.750 "state": "online", 00:21:13.750 "raid_level": "raid1", 00:21:13.750 "superblock": true, 00:21:13.750 "num_base_bdevs": 2, 00:21:13.750 "num_base_bdevs_discovered": 2, 00:21:13.750 "num_base_bdevs_operational": 2, 00:21:13.750 "process": { 00:21:13.750 "type": "rebuild", 00:21:13.750 "target": "spare", 00:21:13.750 "progress": { 00:21:13.750 "blocks": 24576, 00:21:13.750 "percent": 38 00:21:13.750 } 00:21:13.750 }, 00:21:13.750 "base_bdevs_list": [ 00:21:13.750 { 00:21:13.750 "name": "spare", 00:21:13.750 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:13.750 "is_configured": true, 00:21:13.750 "data_offset": 2048, 00:21:13.750 "data_size": 63488 00:21:13.750 }, 00:21:13.750 { 00:21:13.750 "name": "BaseBdev2", 00:21:13.750 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:13.750 "is_configured": true, 00:21:13.750 "data_offset": 2048, 00:21:13.750 "data_size": 63488 00:21:13.750 } 00:21:13.750 ] 00:21:13.750 }' 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.750 22:29:12 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:14.008 [2024-07-14 22:29:12.989410] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:14.008 [2024-07-14 22:29:13.021345] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:14.008 [2024-07-14 22:29:13.021440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.008 22:29:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.266 22:29:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.266 "name": "raid_bdev1", 00:21:14.266 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:14.266 "strip_size_kb": 0, 00:21:14.266 "state": "online", 00:21:14.266 "raid_level": "raid1", 00:21:14.266 "superblock": true, 00:21:14.266 "num_base_bdevs": 2, 00:21:14.266 "num_base_bdevs_discovered": 1, 00:21:14.266 "num_base_bdevs_operational": 1, 00:21:14.266 "base_bdevs_list": [ 00:21:14.266 { 00:21:14.266 "name": null, 00:21:14.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.266 "is_configured": false, 00:21:14.266 "data_offset": 2048, 00:21:14.266 "data_size": 63488 00:21:14.266 }, 
00:21:14.266 { 00:21:14.266 "name": "BaseBdev2", 00:21:14.266 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:14.266 "is_configured": true, 00:21:14.266 "data_offset": 2048, 00:21:14.266 "data_size": 63488 00:21:14.266 } 00:21:14.266 ] 00:21:14.266 }' 00:21:14.266 22:29:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.266 22:29:13 -- common/autotest_common.sh@10 -- # set +x 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.200 22:29:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.200 22:29:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:15.200 "name": "raid_bdev1", 00:21:15.200 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:15.200 "strip_size_kb": 0, 00:21:15.200 "state": "online", 00:21:15.200 "raid_level": "raid1", 00:21:15.200 "superblock": true, 00:21:15.200 "num_base_bdevs": 2, 00:21:15.200 "num_base_bdevs_discovered": 1, 00:21:15.200 "num_base_bdevs_operational": 1, 00:21:15.200 "base_bdevs_list": [ 00:21:15.200 { 00:21:15.200 "name": null, 00:21:15.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.200 "is_configured": false, 00:21:15.200 "data_offset": 2048, 00:21:15.200 "data_size": 63488 00:21:15.200 }, 00:21:15.200 { 00:21:15.200 "name": "BaseBdev2", 00:21:15.201 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:15.201 "is_configured": true, 00:21:15.201 "data_offset": 2048, 00:21:15.201 "data_size": 63488 00:21:15.201 } 00:21:15.201 ] 00:21:15.201 }' 00:21:15.201 22:29:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.459 22:29:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:15.459 22:29:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.459 22:29:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:15.459 22:29:14 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:15.717 [2024-07-14 22:29:14.557105] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:15.717 [2024-07-14 22:29:14.557157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.717 [2024-07-14 22:29:14.570745] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:21:15.717 [2024-07-14 22:29:14.572930] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.717 22:29:14 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.649 22:29:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.906 "name": "raid_bdev1", 00:21:16.906 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:16.906 "strip_size_kb": 0, 00:21:16.906 "state": "online", 00:21:16.906 "raid_level": "raid1", 00:21:16.906 "superblock": true, 00:21:16.906 "num_base_bdevs": 2, 00:21:16.906 "num_base_bdevs_discovered": 2, 00:21:16.906 "num_base_bdevs_operational": 2, 00:21:16.906 "process": { 00:21:16.906 "type": "rebuild", 00:21:16.906 "target": "spare", 00:21:16.906 "progress": { 00:21:16.906 "blocks": 24576, 00:21:16.906 "percent": 38 00:21:16.906 } 00:21:16.906 }, 00:21:16.906 "base_bdevs_list": [ 00:21:16.906 { 00:21:16.906 "name": "spare", 00:21:16.906 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:16.906 "is_configured": true, 00:21:16.906 "data_offset": 2048, 00:21:16.906 "data_size": 63488 00:21:16.906 }, 00:21:16.906 { 00:21:16.906 "name": "BaseBdev2", 00:21:16.906 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:16.906 "is_configured": true, 00:21:16.906 "data_offset": 2048, 00:21:16.906 "data_size": 63488 00:21:16.906 } 00:21:16.906 ] 00:21:16.906 }' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:16.906 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@657 -- # local timeout=412 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.906 22:29:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.163 22:29:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:17.163 "name": "raid_bdev1", 00:21:17.163 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:17.163 "strip_size_kb": 0, 00:21:17.163 "state": "online", 00:21:17.163 "raid_level": "raid1", 00:21:17.163 "superblock": true, 00:21:17.163 "num_base_bdevs": 2, 00:21:17.163 "num_base_bdevs_discovered": 2, 00:21:17.163 "num_base_bdevs_operational": 2, 00:21:17.163 "process": { 00:21:17.163 "type": "rebuild", 00:21:17.163 "target": "spare", 00:21:17.163 "progress": { 00:21:17.163 "blocks": 30720, 00:21:17.163 "percent": 48 00:21:17.163 } 00:21:17.163 }, 00:21:17.163 
"base_bdevs_list": [ 00:21:17.163 { 00:21:17.163 "name": "spare", 00:21:17.163 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:17.163 "is_configured": true, 00:21:17.163 "data_offset": 2048, 00:21:17.164 "data_size": 63488 00:21:17.164 }, 00:21:17.164 { 00:21:17.164 "name": "BaseBdev2", 00:21:17.164 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:17.164 "is_configured": true, 00:21:17.164 "data_offset": 2048, 00:21:17.164 "data_size": 63488 00:21:17.164 } 00:21:17.164 ] 00:21:17.164 }' 00:21:17.164 22:29:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:17.458 22:29:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.458 22:29:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:17.458 22:29:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.458 22:29:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.431 22:29:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.690 "name": "raid_bdev1", 00:21:18.690 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:18.690 "strip_size_kb": 0, 00:21:18.690 "state": "online", 00:21:18.690 "raid_level": "raid1", 00:21:18.690 "superblock": true, 00:21:18.690 "num_base_bdevs": 2, 00:21:18.690 "num_base_bdevs_discovered": 2, 00:21:18.690 "num_base_bdevs_operational": 2, 00:21:18.690 "process": { 00:21:18.690 "type": "rebuild", 00:21:18.690 "target": "spare", 00:21:18.690 "progress": { 00:21:18.690 "blocks": 59392, 00:21:18.690 "percent": 93 00:21:18.690 } 00:21:18.690 }, 00:21:18.690 "base_bdevs_list": [ 00:21:18.690 { 00:21:18.690 "name": "spare", 00:21:18.690 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:18.690 "is_configured": true, 00:21:18.690 "data_offset": 2048, 00:21:18.690 "data_size": 63488 00:21:18.690 }, 00:21:18.690 { 00:21:18.690 "name": "BaseBdev2", 00:21:18.690 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:18.690 "is_configured": true, 00:21:18.690 "data_offset": 2048, 00:21:18.690 "data_size": 63488 00:21:18.690 } 00:21:18.690 ] 00:21:18.690 }' 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.690 22:29:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:18.690 [2024-07-14 22:29:17.694060] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:18.690 [2024-07-14 22:29:17.694158] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:18.690 [2024-07-14 22:29:17.694319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.624 22:29:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.881 22:29:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.881 "name": "raid_bdev1", 00:21:19.881 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:19.881 "strip_size_kb": 0, 00:21:19.881 "state": "online", 00:21:19.881 "raid_level": "raid1", 00:21:19.881 "superblock": true, 00:21:19.881 "num_base_bdevs": 2, 00:21:19.881 "num_base_bdevs_discovered": 2, 00:21:19.881 "num_base_bdevs_operational": 2, 00:21:19.881 "base_bdevs_list": [ 00:21:19.881 { 00:21:19.881 "name": "spare", 00:21:19.881 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:19.881 "is_configured": true, 00:21:19.881 "data_offset": 2048, 00:21:19.881 "data_size": 63488 00:21:19.881 }, 00:21:19.881 { 00:21:19.881 "name": "BaseBdev2", 00:21:19.881 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:19.881 "is_configured": true, 00:21:19.881 "data_offset": 2048, 00:21:19.881 "data_size": 63488 00:21:19.881 } 00:21:19.881 ] 00:21:19.882 }' 00:21:19.882 22:29:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.139 22:29:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:20.139 22:29:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.139 22:29:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:20.139 22:29:19 -- bdev/bdev_raid.sh@660 -- # break 00:21:20.139 22:29:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.139 22:29:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.139 22:29:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:20.140 22:29:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:20.140 22:29:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.140 22:29:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.140 22:29:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.398 "name": "raid_bdev1", 00:21:20.398 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:20.398 "strip_size_kb": 0, 00:21:20.398 "state": "online", 00:21:20.398 "raid_level": "raid1", 00:21:20.398 "superblock": true, 00:21:20.398 "num_base_bdevs": 2, 00:21:20.398 "num_base_bdevs_discovered": 2, 00:21:20.398 "num_base_bdevs_operational": 2, 00:21:20.398 "base_bdevs_list": [ 00:21:20.398 { 00:21:20.398 "name": "spare", 00:21:20.398 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:20.398 "is_configured": true, 00:21:20.398 "data_offset": 2048, 00:21:20.398 "data_size": 63488 00:21:20.398 }, 00:21:20.398 { 00:21:20.398 "name": "BaseBdev2", 00:21:20.398 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:20.398 "is_configured": true, 00:21:20.398 
"data_offset": 2048, 00:21:20.398 "data_size": 63488 00:21:20.398 } 00:21:20.398 ] 00:21:20.398 }' 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.398 22:29:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.656 22:29:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.656 "name": "raid_bdev1", 00:21:20.656 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:20.656 "strip_size_kb": 0, 00:21:20.656 "state": "online", 00:21:20.656 "raid_level": "raid1", 00:21:20.656 "superblock": true, 00:21:20.656 "num_base_bdevs": 2, 00:21:20.656 "num_base_bdevs_discovered": 2, 00:21:20.656 "num_base_bdevs_operational": 2, 00:21:20.656 "base_bdevs_list": [ 00:21:20.656 { 00:21:20.656 "name": "spare", 00:21:20.656 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:20.656 "is_configured": true, 00:21:20.656 "data_offset": 2048, 00:21:20.656 "data_size": 63488 00:21:20.656 }, 00:21:20.656 { 00:21:20.656 "name": "BaseBdev2", 00:21:20.656 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:20.656 "is_configured": true, 00:21:20.656 "data_offset": 2048, 00:21:20.656 "data_size": 63488 00:21:20.656 } 00:21:20.656 ] 00:21:20.656 }' 00:21:20.656 22:29:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.656 22:29:19 -- common/autotest_common.sh@10 -- # set +x 00:21:21.589 22:29:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:21.590 [2024-07-14 22:29:20.558553] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.590 [2024-07-14 22:29:20.558594] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.590 [2024-07-14 22:29:20.558738] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.590 [2024-07-14 22:29:20.558869] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.590 [2024-07-14 22:29:20.558885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:21.590 22:29:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.590 22:29:20 -- bdev/bdev_raid.sh@671 -- # jq 
length 00:21:21.848 22:29:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:21.848 22:29:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:21.848 22:29:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@12 -- # local i 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:21.848 22:29:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:22.107 /dev/nbd0 00:21:22.107 22:29:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:22.107 22:29:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:22.107 22:29:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:22.107 22:29:21 -- common/autotest_common.sh@857 -- # local i 00:21:22.107 22:29:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:22.107 22:29:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:22.107 22:29:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:22.107 22:29:21 -- common/autotest_common.sh@861 -- # break 00:21:22.107 22:29:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:22.107 22:29:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:22.107 22:29:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.107 1+0 records in 00:21:22.107 1+0 records out 00:21:22.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529105 s, 7.7 MB/s 00:21:22.107 22:29:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.107 22:29:21 -- common/autotest_common.sh@874 -- # size=4096 00:21:22.107 22:29:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.107 22:29:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:22.107 22:29:21 -- common/autotest_common.sh@877 -- # return 0 00:21:22.107 22:29:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.107 22:29:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:22.107 22:29:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:22.366 /dev/nbd1 00:21:22.366 22:29:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:22.366 22:29:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:22.366 22:29:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:22.366 22:29:21 -- common/autotest_common.sh@857 -- # local i 00:21:22.366 22:29:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:22.366 22:29:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:22.366 22:29:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:22.366 22:29:21 -- common/autotest_common.sh@861 -- # break 00:21:22.366 22:29:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:22.366 22:29:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:22.366 22:29:21 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.366 1+0 records in 00:21:22.366 1+0 records out 00:21:22.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649514 s, 6.3 MB/s 00:21:22.366 22:29:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.366 22:29:21 -- common/autotest_common.sh@874 -- # size=4096 00:21:22.366 22:29:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.366 22:29:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:22.366 22:29:21 -- common/autotest_common.sh@877 -- # return 0 00:21:22.366 22:29:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.366 22:29:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:22.366 22:29:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:22.624 22:29:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@51 -- # local i 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.624 22:29:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@41 -- # break 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.883 22:29:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.141 22:29:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@41 -- # break 00:21:23.141 22:29:22 -- bdev/nbd_common.sh@45 -- # return 0 00:21:23.141 22:29:22 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:23.141 22:29:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:23.141 22:29:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:23.141 22:29:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:23.400 22:29:22 -- 
bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:23.658 [2024-07-14 22:29:22.492173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:23.658 [2024-07-14 22:29:22.492272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.658 [2024-07-14 22:29:22.492309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:23.658 [2024-07-14 22:29:22.492336] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.659 [2024-07-14 22:29:22.494877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.659 [2024-07-14 22:29:22.494947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:23.659 [2024-07-14 22:29:22.495102] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:23.659 [2024-07-14 22:29:22.495166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.659 BaseBdev1 00:21:23.659 22:29:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:23.659 22:29:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:23.659 22:29:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:23.917 22:29:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:23.917 [2024-07-14 22:29:22.940279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:23.917 [2024-07-14 22:29:22.940374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.917 [2024-07-14 22:29:22.940409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:23.917 [2024-07-14 22:29:22.940470] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.917 [2024-07-14 22:29:22.940987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.917 [2024-07-14 22:29:22.941105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:23.917 [2024-07-14 22:29:22.941236] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:23.917 [2024-07-14 22:29:22.941251] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:23.917 [2024-07-14 22:29:22.941273] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.917 [2024-07-14 22:29:22.941321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:21:23.917 [2024-07-14 22:29:22.941397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.917 BaseBdev2 00:21:23.917 22:29:22 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:24.176 22:29:23 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:24.434 [2024-07-14 22:29:23.364416] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on spare_delay 00:21:24.434 [2024-07-14 22:29:23.364548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.434 [2024-07-14 22:29:23.364595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:24.434 [2024-07-14 22:29:23.364619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.434 [2024-07-14 22:29:23.365241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.434 [2024-07-14 22:29:23.365324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:24.434 [2024-07-14 22:29:23.365453] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:24.434 [2024-07-14 22:29:23.365522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.434 spare 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.434 22:29:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.434 [2024-07-14 22:29:23.465648] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:24.434 [2024-07-14 22:29:23.465670] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:24.434 [2024-07-14 22:29:23.465810] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:21:24.434 [2024-07-14 22:29:23.466231] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:21:24.434 [2024-07-14 22:29:23.466280] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:24.434 [2024-07-14 22:29:23.466417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.692 22:29:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.692 "name": "raid_bdev1", 00:21:24.692 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:24.692 "strip_size_kb": 0, 00:21:24.692 "state": "online", 00:21:24.692 "raid_level": "raid1", 00:21:24.692 "superblock": true, 00:21:24.692 "num_base_bdevs": 2, 00:21:24.692 "num_base_bdevs_discovered": 2, 00:21:24.692 "num_base_bdevs_operational": 2, 00:21:24.692 "base_bdevs_list": [ 00:21:24.692 { 00:21:24.692 "name": "spare", 00:21:24.692 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:24.692 "is_configured": true, 00:21:24.692 "data_offset": 2048, 00:21:24.692 "data_size": 63488 00:21:24.692 }, 00:21:24.692 { 00:21:24.692 "name": "BaseBdev2", 00:21:24.692 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:24.692 "is_configured": true, 
00:21:24.692 "data_offset": 2048, 00:21:24.692 "data_size": 63488 00:21:24.692 } 00:21:24.692 ] 00:21:24.692 }' 00:21:24.692 22:29:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.692 22:29:23 -- common/autotest_common.sh@10 -- # set +x 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.258 22:29:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.517 "name": "raid_bdev1", 00:21:25.517 "uuid": "63f85fcb-cf88-4b71-81f7-19ebeec6e43c", 00:21:25.517 "strip_size_kb": 0, 00:21:25.517 "state": "online", 00:21:25.517 "raid_level": "raid1", 00:21:25.517 "superblock": true, 00:21:25.517 "num_base_bdevs": 2, 00:21:25.517 "num_base_bdevs_discovered": 2, 00:21:25.517 "num_base_bdevs_operational": 2, 00:21:25.517 "base_bdevs_list": [ 00:21:25.517 { 00:21:25.517 "name": "spare", 00:21:25.517 "uuid": "03327bea-c7db-5d03-8584-99a4b6d67dc2", 00:21:25.517 "is_configured": true, 00:21:25.517 "data_offset": 2048, 00:21:25.517 "data_size": 63488 00:21:25.517 }, 00:21:25.517 { 00:21:25.517 "name": "BaseBdev2", 00:21:25.517 "uuid": "65354f5c-6b4d-528e-be40-a7e601bbdc07", 00:21:25.517 "is_configured": true, 00:21:25.517 "data_offset": 2048, 00:21:25.517 "data_size": 63488 00:21:25.517 } 00:21:25.517 ] 00:21:25.517 }' 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:25.517 22:29:24 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.775 22:29:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.775 22:29:24 -- bdev/bdev_raid.sh@709 -- # killprocess 126139 00:21:25.775 22:29:24 -- common/autotest_common.sh@926 -- # '[' -z 126139 ']' 00:21:25.775 22:29:24 -- common/autotest_common.sh@930 -- # kill -0 126139 00:21:25.775 22:29:24 -- common/autotest_common.sh@931 -- # uname 00:21:25.775 22:29:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:25.775 22:29:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126139 00:21:25.775 killing process with pid 126139 00:21:25.775 22:29:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:25.775 22:29:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:25.775 22:29:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126139' 00:21:25.776 22:29:24 -- common/autotest_common.sh@945 -- # kill 126139 00:21:25.776 22:29:24 -- common/autotest_common.sh@950 -- # wait 126139 00:21:25.776 Received shutdown signal, test time was about 60.000000 seconds 00:21:25.776 00:21:25.776 Latency(us) 00:21:25.776 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:25.776 =================================================================================================================== 00:21:25.776 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.776 [2024-07-14 22:29:24.813346] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.776 [2024-07-14 22:29:24.813496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.776 [2024-07-14 22:29:24.813587] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.776 [2024-07-14 22:29:24.813607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:26.034 [2024-07-14 22:29:25.025584] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.406 ************************************ 00:21:27.406 END TEST raid_rebuild_test_sb 00:21:27.406 ************************************ 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:27.406 00:21:27.406 real 0m25.841s 00:21:27.406 user 0m37.497s 00:21:27.406 sys 0m4.163s 00:21:27.406 22:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.406 22:29:26 -- common/autotest_common.sh@10 -- # set +x 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:27.406 22:29:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:27.406 22:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.406 22:29:26 -- common/autotest_common.sh@10 -- # set +x 00:21:27.406 ************************************ 00:21:27.406 START TEST raid_rebuild_test_io 00:21:27.406 ************************************ 00:21:27.406 22:29:26 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:27.406 22:29:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:27.407 22:29:26 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:27.407 22:29:26 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:27.407 22:29:26 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:27.407 22:29:26 -- 
bdev/bdev_raid.sh@544 -- # raid_pid=126806 00:21:27.407 22:29:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126806 /var/tmp/spdk-raid.sock 00:21:27.407 22:29:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.407 22:29:26 -- common/autotest_common.sh@819 -- # '[' -z 126806 ']' 00:21:27.407 22:29:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:27.407 22:29:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:27.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:27.407 22:29:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:27.407 22:29:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:27.407 22:29:26 -- common/autotest_common.sh@10 -- # set +x 00:21:27.407 [2024-07-14 22:29:26.222702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:27.407 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.407 Zero copy mechanism will not be used. 00:21:27.407 [2024-07-14 22:29:26.222879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126806 ] 00:21:27.407 [2024-07-14 22:29:26.379947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.664 [2024-07-14 22:29:26.574783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.922 [2024-07-14 22:29:26.763388] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.179 22:29:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:28.179 22:29:27 -- common/autotest_common.sh@852 -- # return 0 00:21:28.179 22:29:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:28.179 22:29:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:28.179 22:29:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:28.436 BaseBdev1 00:21:28.436 22:29:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:28.436 22:29:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:28.436 22:29:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.696 BaseBdev2 00:21:28.696 22:29:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:28.954 spare_malloc 00:21:28.954 22:29:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:29.211 spare_delay 00:21:29.211 22:29:28 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:29.469 [2024-07-14 22:29:28.411125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.469 [2024-07-14 22:29:28.411216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.469 [2024-07-14 22:29:28.411252] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:29.469 [2024-07-14 22:29:28.411295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.469 [2024-07-14 22:29:28.413350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.469 [2024-07-14 22:29:28.413395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.469 spare 00:21:29.469 22:29:28 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:29.727 [2024-07-14 22:29:28.651219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.727 [2024-07-14 22:29:28.652741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.727 [2024-07-14 22:29:28.652843] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:29.727 [2024-07-14 22:29:28.652856] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:29.727 [2024-07-14 22:29:28.653010] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:29.728 [2024-07-14 22:29:28.653425] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:29.728 [2024-07-14 22:29:28.653447] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:29.728 [2024-07-14 22:29:28.653607] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.728 22:29:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.987 22:29:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.987 "name": "raid_bdev1", 00:21:29.987 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:29.987 "strip_size_kb": 0, 00:21:29.987 "state": "online", 00:21:29.987 "raid_level": "raid1", 00:21:29.987 "superblock": false, 00:21:29.987 "num_base_bdevs": 2, 00:21:29.987 "num_base_bdevs_discovered": 2, 00:21:29.987 "num_base_bdevs_operational": 2, 00:21:29.987 "base_bdevs_list": [ 00:21:29.987 { 00:21:29.987 "name": "BaseBdev1", 00:21:29.987 "uuid": "35097045-947c-43ea-a372-67921ebd8619", 00:21:29.987 "is_configured": true, 00:21:29.987 "data_offset": 0, 00:21:29.987 "data_size": 65536 00:21:29.987 }, 00:21:29.987 { 00:21:29.987 "name": "BaseBdev2", 00:21:29.987 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:29.987 "is_configured": true, 
00:21:29.987 "data_offset": 0, 00:21:29.987 "data_size": 65536 00:21:29.987 } 00:21:29.987 ] 00:21:29.987 }' 00:21:29.987 22:29:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.987 22:29:28 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 22:29:29 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:30.553 22:29:29 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:30.811 [2024-07-14 22:29:29.743563] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.811 22:29:29 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:30.811 22:29:29 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.811 22:29:29 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:31.069 22:29:29 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:31.069 22:29:29 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:31.069 22:29:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:31.069 22:29:29 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:31.069 [2024-07-14 22:29:30.047032] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:31.069 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.069 Zero copy mechanism will not be used. 00:21:31.069 Running I/O for 60 seconds... 00:21:31.069 [2024-07-14 22:29:30.115149] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.069 [2024-07-14 22:29:30.121032] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.328 22:29:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.586 22:29:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.586 "name": "raid_bdev1", 00:21:31.586 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:31.586 "strip_size_kb": 0, 00:21:31.586 "state": "online", 00:21:31.586 "raid_level": "raid1", 00:21:31.586 "superblock": false, 00:21:31.586 "num_base_bdevs": 2, 00:21:31.586 "num_base_bdevs_discovered": 1, 00:21:31.586 "num_base_bdevs_operational": 1, 00:21:31.586 "base_bdevs_list": [ 00:21:31.586 { 00:21:31.586 "name": null, 00:21:31.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.586 "is_configured": false, 00:21:31.586 
"data_offset": 0, 00:21:31.586 "data_size": 65536 00:21:31.586 }, 00:21:31.586 { 00:21:31.586 "name": "BaseBdev2", 00:21:31.586 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:31.586 "is_configured": true, 00:21:31.586 "data_offset": 0, 00:21:31.586 "data_size": 65536 00:21:31.586 } 00:21:31.586 ] 00:21:31.586 }' 00:21:31.586 22:29:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.586 22:29:30 -- common/autotest_common.sh@10 -- # set +x 00:21:32.152 22:29:31 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.419 [2024-07-14 22:29:31.357665] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:32.419 [2024-07-14 22:29:31.357720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.419 [2024-07-14 22:29:31.404113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:32.419 [2024-07-14 22:29:31.406464] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:32.419 22:29:31 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:32.687 [2024-07-14 22:29:31.515851] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:32.687 [2024-07-14 22:29:31.516435] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:32.687 [2024-07-14 22:29:31.732260] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:32.687 [2024-07-14 22:29:31.732634] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:33.254 [2024-07-14 22:29:32.089874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:33.254 [2024-07-14 22:29:32.311548] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:33.254 [2024-07-14 22:29:32.311934] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.512 22:29:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.771 [2024-07-14 22:29:32.640855] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:33.771 [2024-07-14 22:29:32.641538] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.771 "name": "raid_bdev1", 00:21:33.771 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:33.771 "strip_size_kb": 0, 00:21:33.771 "state": "online", 00:21:33.771 "raid_level": "raid1", 
00:21:33.771 "superblock": false, 00:21:33.771 "num_base_bdevs": 2, 00:21:33.771 "num_base_bdevs_discovered": 2, 00:21:33.771 "num_base_bdevs_operational": 2, 00:21:33.771 "process": { 00:21:33.771 "type": "rebuild", 00:21:33.771 "target": "spare", 00:21:33.771 "progress": { 00:21:33.771 "blocks": 12288, 00:21:33.771 "percent": 18 00:21:33.771 } 00:21:33.771 }, 00:21:33.771 "base_bdevs_list": [ 00:21:33.771 { 00:21:33.771 "name": "spare", 00:21:33.771 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:33.771 "is_configured": true, 00:21:33.771 "data_offset": 0, 00:21:33.771 "data_size": 65536 00:21:33.771 }, 00:21:33.771 { 00:21:33.771 "name": "BaseBdev2", 00:21:33.771 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:33.771 "is_configured": true, 00:21:33.771 "data_offset": 0, 00:21:33.771 "data_size": 65536 00:21:33.771 } 00:21:33.771 ] 00:21:33.771 }' 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.771 22:29:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:34.030 [2024-07-14 22:29:32.852132] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:34.030 [2024-07-14 22:29:32.852427] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:34.030 [2024-07-14 22:29:32.947358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:34.030 [2024-07-14 22:29:32.983277] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:34.030 [2024-07-14 22:29:32.993117] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.030 [2024-07-14 22:29:33.035348] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.030 22:29:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.289 22:29:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.289 "name": "raid_bdev1", 00:21:34.289 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:34.289 "strip_size_kb": 0, 00:21:34.289 "state": "online", 00:21:34.289 "raid_level": "raid1", 00:21:34.289 "superblock": false, 00:21:34.289 
"num_base_bdevs": 2, 00:21:34.289 "num_base_bdevs_discovered": 1, 00:21:34.289 "num_base_bdevs_operational": 1, 00:21:34.289 "base_bdevs_list": [ 00:21:34.289 { 00:21:34.289 "name": null, 00:21:34.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.289 "is_configured": false, 00:21:34.289 "data_offset": 0, 00:21:34.289 "data_size": 65536 00:21:34.289 }, 00:21:34.289 { 00:21:34.289 "name": "BaseBdev2", 00:21:34.289 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:34.289 "is_configured": true, 00:21:34.289 "data_offset": 0, 00:21:34.289 "data_size": 65536 00:21:34.289 } 00:21:34.289 ] 00:21:34.289 }' 00:21:34.290 22:29:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.290 22:29:33 -- common/autotest_common.sh@10 -- # set +x 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.224 22:29:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.224 "name": "raid_bdev1", 00:21:35.224 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:35.224 "strip_size_kb": 0, 00:21:35.224 "state": "online", 00:21:35.224 "raid_level": "raid1", 00:21:35.224 "superblock": false, 00:21:35.224 "num_base_bdevs": 2, 00:21:35.224 "num_base_bdevs_discovered": 1, 00:21:35.224 "num_base_bdevs_operational": 1, 00:21:35.224 "base_bdevs_list": [ 00:21:35.224 { 00:21:35.224 "name": null, 00:21:35.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.224 "is_configured": false, 00:21:35.224 "data_offset": 0, 00:21:35.224 "data_size": 65536 00:21:35.224 }, 00:21:35.224 { 00:21:35.224 "name": "BaseBdev2", 00:21:35.224 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:35.224 "is_configured": true, 00:21:35.224 "data_offset": 0, 00:21:35.224 "data_size": 65536 00:21:35.224 } 00:21:35.224 ] 00:21:35.224 }' 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.224 22:29:34 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:35.482 [2024-07-14 22:29:34.512179] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:35.482 [2024-07-14 22:29:34.512258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:35.740 22:29:34 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:35.740 [2024-07-14 22:29:34.574712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:35.740 [2024-07-14 22:29:34.576800] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:35.740 [2024-07-14 22:29:34.703828] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:35.740 
[2024-07-14 22:29:34.704360] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:35.998 [2024-07-14 22:29:34.929033] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:35.998 [2024-07-14 22:29:34.929457] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:36.266 [2024-07-14 22:29:35.265758] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:36.525 [2024-07-14 22:29:35.489936] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.525 22:29:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.783 22:29:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:36.783 "name": "raid_bdev1", 00:21:36.783 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:36.783 "strip_size_kb": 0, 00:21:36.783 "state": "online", 00:21:36.783 "raid_level": "raid1", 00:21:36.783 "superblock": false, 00:21:36.783 "num_base_bdevs": 2, 00:21:36.783 "num_base_bdevs_discovered": 2, 00:21:36.783 "num_base_bdevs_operational": 2, 00:21:36.783 "process": { 00:21:36.783 "type": "rebuild", 00:21:36.783 "target": "spare", 00:21:36.783 "progress": { 00:21:36.783 "blocks": 12288, 00:21:36.783 "percent": 18 00:21:36.783 } 00:21:36.783 }, 00:21:36.783 "base_bdevs_list": [ 00:21:36.783 { 00:21:36.783 "name": "spare", 00:21:36.783 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:36.783 "is_configured": true, 00:21:36.783 "data_offset": 0, 00:21:36.783 "data_size": 65536 00:21:36.783 }, 00:21:36.783 { 00:21:36.783 "name": "BaseBdev2", 00:21:36.783 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:36.783 "is_configured": true, 00:21:36.783 "data_offset": 0, 00:21:36.783 "data_size": 65536 00:21:36.783 } 00:21:36.783 ] 00:21:36.783 }' 00:21:36.783 22:29:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@657 -- # local timeout=432 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.042 22:29:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.042 [2024-07-14 22:29:35.930503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.301 "name": "raid_bdev1", 00:21:37.301 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:37.301 "strip_size_kb": 0, 00:21:37.301 "state": "online", 00:21:37.301 "raid_level": "raid1", 00:21:37.301 "superblock": false, 00:21:37.301 "num_base_bdevs": 2, 00:21:37.301 "num_base_bdevs_discovered": 2, 00:21:37.301 "num_base_bdevs_operational": 2, 00:21:37.301 "process": { 00:21:37.301 "type": "rebuild", 00:21:37.301 "target": "spare", 00:21:37.301 "progress": { 00:21:37.301 "blocks": 18432, 00:21:37.301 "percent": 28 00:21:37.301 } 00:21:37.301 }, 00:21:37.301 "base_bdevs_list": [ 00:21:37.301 { 00:21:37.301 "name": "spare", 00:21:37.301 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:37.301 "is_configured": true, 00:21:37.301 "data_offset": 0, 00:21:37.301 "data_size": 65536 00:21:37.301 }, 00:21:37.301 { 00:21:37.301 "name": "BaseBdev2", 00:21:37.301 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:37.301 "is_configured": true, 00:21:37.301 "data_offset": 0, 00:21:37.301 "data_size": 65536 00:21:37.301 } 00:21:37.301 ] 00:21:37.301 }' 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.301 [2024-07-14 22:29:36.265744] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.301 22:29:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:37.559 [2024-07-14 22:29:36.402074] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:37.817 [2024-07-14 22:29:36.743716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:38.075 [2024-07-14 22:29:36.966316] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.333 22:29:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:38.592 22:29:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.592 "name": "raid_bdev1", 00:21:38.592 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:38.592 "strip_size_kb": 0, 00:21:38.592 "state": "online", 00:21:38.592 "raid_level": "raid1", 00:21:38.592 "superblock": false, 00:21:38.592 "num_base_bdevs": 2, 00:21:38.592 "num_base_bdevs_discovered": 2, 00:21:38.592 "num_base_bdevs_operational": 2, 00:21:38.592 "process": { 00:21:38.592 "type": "rebuild", 00:21:38.592 "target": "spare", 00:21:38.592 "progress": { 00:21:38.592 "blocks": 36864, 00:21:38.592 "percent": 56 00:21:38.592 } 00:21:38.592 }, 00:21:38.592 "base_bdevs_list": [ 00:21:38.592 { 00:21:38.592 "name": "spare", 00:21:38.592 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:38.592 "is_configured": true, 00:21:38.592 "data_offset": 0, 00:21:38.592 "data_size": 65536 00:21:38.592 }, 00:21:38.592 { 00:21:38.592 "name": "BaseBdev2", 00:21:38.592 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:38.592 "is_configured": true, 00:21:38.592 "data_offset": 0, 00:21:38.592 "data_size": 65536 00:21:38.592 } 00:21:38.592 ] 00:21:38.592 }' 00:21:38.592 22:29:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.592 22:29:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.592 22:29:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.850 22:29:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.850 22:29:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:39.107 [2024-07-14 22:29:38.029277] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:39.365 [2024-07-14 22:29:38.358696] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.931 [2024-07-14 22:29:38.804603] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:39.931 [2024-07-14 22:29:38.804903] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:39.931 "name": "raid_bdev1", 00:21:39.931 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:39.931 "strip_size_kb": 0, 00:21:39.931 "state": "online", 00:21:39.931 "raid_level": "raid1", 00:21:39.931 "superblock": false, 00:21:39.931 "num_base_bdevs": 2, 00:21:39.931 "num_base_bdevs_discovered": 2, 00:21:39.931 "num_base_bdevs_operational": 2, 00:21:39.931 "process": { 00:21:39.931 "type": "rebuild", 00:21:39.931 "target": "spare", 00:21:39.931 "progress": { 00:21:39.931 "blocks": 59392, 00:21:39.931 "percent": 90 00:21:39.931 } 00:21:39.931 }, 
00:21:39.931 "base_bdevs_list": [ 00:21:39.931 { 00:21:39.931 "name": "spare", 00:21:39.931 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:39.931 "is_configured": true, 00:21:39.931 "data_offset": 0, 00:21:39.931 "data_size": 65536 00:21:39.931 }, 00:21:39.931 { 00:21:39.931 "name": "BaseBdev2", 00:21:39.931 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:39.931 "is_configured": true, 00:21:39.931 "data_offset": 0, 00:21:39.931 "data_size": 65536 00:21:39.931 } 00:21:39.931 ] 00:21:39.931 }' 00:21:39.931 22:29:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:40.189 22:29:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:40.189 22:29:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:40.189 22:29:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.189 22:29:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:40.189 [2024-07-14 22:29:39.236254] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:40.448 [2024-07-14 22:29:39.341940] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:40.448 [2024-07-14 22:29:39.343908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:41.384 "name": "raid_bdev1", 00:21:41.384 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:41.384 "strip_size_kb": 0, 00:21:41.384 "state": "online", 00:21:41.384 "raid_level": "raid1", 00:21:41.384 "superblock": false, 00:21:41.384 "num_base_bdevs": 2, 00:21:41.384 "num_base_bdevs_discovered": 2, 00:21:41.384 "num_base_bdevs_operational": 2, 00:21:41.384 "base_bdevs_list": [ 00:21:41.384 { 00:21:41.384 "name": "spare", 00:21:41.384 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:41.384 "is_configured": true, 00:21:41.384 "data_offset": 0, 00:21:41.384 "data_size": 65536 00:21:41.384 }, 00:21:41.384 { 00:21:41.384 "name": "BaseBdev2", 00:21:41.384 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:41.384 "is_configured": true, 00:21:41.384 "data_offset": 0, 00:21:41.384 "data_size": 65536 00:21:41.384 } 00:21:41.384 ] 00:21:41.384 }' 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@660 -- # break 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:41.384 22:29:40 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.384 22:29:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.643 22:29:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:41.643 "name": "raid_bdev1", 00:21:41.643 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:41.643 "strip_size_kb": 0, 00:21:41.643 "state": "online", 00:21:41.643 "raid_level": "raid1", 00:21:41.643 "superblock": false, 00:21:41.643 "num_base_bdevs": 2, 00:21:41.643 "num_base_bdevs_discovered": 2, 00:21:41.643 "num_base_bdevs_operational": 2, 00:21:41.643 "base_bdevs_list": [ 00:21:41.643 { 00:21:41.643 "name": "spare", 00:21:41.643 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:41.643 "is_configured": true, 00:21:41.643 "data_offset": 0, 00:21:41.643 "data_size": 65536 00:21:41.643 }, 00:21:41.643 { 00:21:41.643 "name": "BaseBdev2", 00:21:41.643 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:41.643 "is_configured": true, 00:21:41.643 "data_offset": 0, 00:21:41.643 "data_size": 65536 00:21:41.643 } 00:21:41.643 ] 00:21:41.643 }' 00:21:41.643 22:29:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:41.643 22:29:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:41.643 22:29:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:41.901 22:29:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:41.901 22:29:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:41.901 22:29:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:41.901 22:29:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.902 "name": "raid_bdev1", 00:21:41.902 "uuid": "58875cc8-b975-4555-a5b5-697b72281250", 00:21:41.902 "strip_size_kb": 0, 00:21:41.902 "state": "online", 00:21:41.902 "raid_level": "raid1", 00:21:41.902 "superblock": false, 00:21:41.902 "num_base_bdevs": 2, 00:21:41.902 "num_base_bdevs_discovered": 2, 00:21:41.902 "num_base_bdevs_operational": 2, 00:21:41.902 "base_bdevs_list": [ 00:21:41.902 { 00:21:41.902 "name": "spare", 00:21:41.902 "uuid": "fc50ee9e-4ddd-5b3b-a1bb-ed753606937a", 00:21:41.902 "is_configured": true, 00:21:41.902 "data_offset": 0, 00:21:41.902 "data_size": 65536 00:21:41.902 }, 00:21:41.902 { 00:21:41.902 "name": "BaseBdev2", 00:21:41.902 "uuid": "10e76b34-ccf1-4ea3-afff-04cb1241c911", 00:21:41.902 "is_configured": true, 
00:21:41.902 "data_offset": 0, 00:21:41.902 "data_size": 65536 00:21:41.902 } 00:21:41.902 ] 00:21:41.902 }' 00:21:41.902 22:29:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.902 22:29:40 -- common/autotest_common.sh@10 -- # set +x 00:21:42.838 22:29:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:42.838 [2024-07-14 22:29:41.852438] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.838 [2024-07-14 22:29:41.852481] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.097 00:21:43.097 Latency(us) 00:21:43.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.097 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:43.097 raid_bdev1 : 11.87 108.08 324.24 0.00 0.00 12849.68 305.34 117249.86 00:21:43.097 =================================================================================================================== 00:21:43.097 Total : 108.08 324.24 0.00 0.00 12849.68 305.34 117249.86 00:21:43.097 [2024-07-14 22:29:41.935135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.097 [2024-07-14 22:29:41.935188] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.097 [2024-07-14 22:29:41.935270] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.097 [2024-07-14 22:29:41.935286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:43.097 0 00:21:43.097 22:29:41 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.097 22:29:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:43.356 22:29:42 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:43.356 22:29:42 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:43.356 22:29:42 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@12 -- # local i 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.356 22:29:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:43.615 /dev/nbd0 00:21:43.615 22:29:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:43.615 22:29:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:43.615 22:29:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:43.616 22:29:42 -- common/autotest_common.sh@857 -- # local i 00:21:43.616 22:29:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:43.616 22:29:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:43.616 22:29:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:43.616 22:29:42 -- common/autotest_common.sh@861 -- # break 00:21:43.616 22:29:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 
00:21:43.616 22:29:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:43.616 22:29:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:43.616 1+0 records in 00:21:43.616 1+0 records out 00:21:43.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606727 s, 6.8 MB/s 00:21:43.616 22:29:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.616 22:29:42 -- common/autotest_common.sh@874 -- # size=4096 00:21:43.616 22:29:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.616 22:29:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:43.616 22:29:42 -- common/autotest_common.sh@877 -- # return 0 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.616 22:29:42 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:43.616 22:29:42 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:43.616 22:29:42 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@12 -- # local i 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.616 22:29:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:43.875 /dev/nbd1 00:21:43.875 22:29:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:43.875 22:29:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:43.875 22:29:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:43.875 22:29:42 -- common/autotest_common.sh@857 -- # local i 00:21:43.875 22:29:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:43.875 22:29:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:43.875 22:29:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:43.875 22:29:42 -- common/autotest_common.sh@861 -- # break 00:21:43.875 22:29:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:43.875 22:29:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:43.875 22:29:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:43.875 1+0 records in 00:21:43.875 1+0 records out 00:21:43.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628038 s, 6.5 MB/s 00:21:43.875 22:29:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.875 22:29:42 -- common/autotest_common.sh@874 -- # size=4096 00:21:43.875 22:29:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.875 22:29:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:43.875 22:29:42 -- common/autotest_common.sh@877 -- # return 0 00:21:43.875 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:43.875 22:29:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.875 
22:29:42 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:44.134 22:29:42 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@51 -- # local i 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.134 22:29:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@41 -- # break 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@45 -- # return 0 00:21:44.393 22:29:43 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@51 -- # local i 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.393 22:29:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:44.650 22:29:43 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:44.908 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:44.908 22:29:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.908 22:29:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:44.908 22:29:43 -- bdev/nbd_common.sh@41 -- # break 00:21:44.908 22:29:43 -- bdev/nbd_common.sh@45 -- # return 0 00:21:44.908 22:29:43 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:44.908 22:29:43 -- bdev/bdev_raid.sh@709 -- # killprocess 126806 00:21:44.908 22:29:43 -- common/autotest_common.sh@926 -- # '[' -z 126806 ']' 00:21:44.908 22:29:43 -- common/autotest_common.sh@930 -- # kill -0 126806 00:21:44.908 22:29:43 -- common/autotest_common.sh@931 -- # uname 00:21:44.908 22:29:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.908 22:29:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126806 00:21:44.908 killing process with pid 126806 00:21:44.908 Received shutdown 
signal, test time was about 13.705383 seconds
00:21:44.908
00:21:44.908 Latency(us)
00:21:44.908 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:21:44.908 ===================================================================================================================
00:21:44.908 Total      :                 0.00   0.00   0.00    0.00  0.00     0.00  0.00
00:21:44.908 22:29:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:44.908 22:29:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:44.908 22:29:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126806' 00:21:44.908 22:29:43 -- common/autotest_common.sh@945 -- # kill 126806 00:21:44.908 22:29:43 -- common/autotest_common.sh@950 -- # wait 126806 00:21:44.908 [2024-07-14 22:29:43.754763] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.908 [2024-07-14 22:29:43.902420] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.842 ************************************ 00:21:45.842 END TEST raid_rebuild_test_io 00:21:45.842 ************************************ 00:21:45.842 22:29:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:45.842 00:21:45.842 real 0m18.695s 00:21:45.842 user 0m28.906s 00:21:45.842 sys 0m1.883s 00:21:45.842 22:29:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.842 22:29:44 -- common/autotest_common.sh@10 -- # set +x 00:21:45.842 22:29:44 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:45.842 22:29:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:45.842 22:29:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:45.842 22:29:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.100 ************************************ 00:21:46.100 START TEST raid_rebuild_test_sb_io 00:21:46.100 ************************************ 00:21:46.100 22:29:44 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@536 -- 
strip_size=0 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@544 -- # raid_pid=127340 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127340 /var/tmp/spdk-raid.sock 00:21:46.100 22:29:44 -- common/autotest_common.sh@819 -- # '[' -z 127340 ']' 00:21:46.100 22:29:44 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:46.100 22:29:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:46.100 22:29:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.100 22:29:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:46.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:46.100 22:29:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.100 22:29:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.100 [2024-07-14 22:29:44.987446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:46.100 [2024-07-14 22:29:44.987976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127340 ] 00:21:46.100 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:46.100 Zero copy mechanism will not be used. 00:21:46.100 [2024-07-14 22:29:45.157695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.420 [2024-07-14 22:29:45.387498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.691 [2024-07-14 22:29:45.593019] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.949 22:29:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:46.949 22:29:45 -- common/autotest_common.sh@852 -- # return 0 00:21:46.949 22:29:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:46.949 22:29:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:46.949 22:29:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:47.207 BaseBdev1_malloc 00:21:47.207 22:29:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:47.465 [2024-07-14 22:29:46.396705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:47.465 [2024-07-14 22:29:46.396809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.465 [2024-07-14 22:29:46.396846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:47.465 [2024-07-14 22:29:46.396895] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.466 [2024-07-14 22:29:46.399368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.466 [2024-07-14 22:29:46.399412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.466 BaseBdev1 00:21:47.466 22:29:46 -- bdev/bdev_raid.sh@548 -- # for bdev in 
"${base_bdevs[@]}" 00:21:47.466 22:29:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:47.466 22:29:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:47.724 BaseBdev2_malloc 00:21:47.724 22:29:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:47.981 [2024-07-14 22:29:46.844492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:47.981 [2024-07-14 22:29:46.844588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.981 [2024-07-14 22:29:46.844632] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:47.981 [2024-07-14 22:29:46.844683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.981 [2024-07-14 22:29:46.846904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.981 [2024-07-14 22:29:46.846950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:47.981 BaseBdev2 00:21:47.982 22:29:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:48.239 spare_malloc 00:21:48.239 22:29:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:48.497 spare_delay 00:21:48.497 22:29:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:48.755 [2024-07-14 22:29:47.582754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.755 [2024-07-14 22:29:47.582864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.755 [2024-07-14 22:29:47.582906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:48.755 [2024-07-14 22:29:47.582948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.755 [2024-07-14 22:29:47.585244] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.755 [2024-07-14 22:29:47.585313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.755 spare 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:48.755 [2024-07-14 22:29:47.778831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.755 [2024-07-14 22:29:47.780760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.755 [2024-07-14 22:29:47.780974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:48.755 [2024-07-14 22:29:47.780989] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.755 [2024-07-14 22:29:47.781135] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:48.755 [2024-07-14 22:29:47.781517] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:48.755 [2024-07-14 22:29:47.781548] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:48.755 [2024-07-14 22:29:47.781716] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.755 22:29:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.013 22:29:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.013 "name": "raid_bdev1", 00:21:49.013 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:49.013 "strip_size_kb": 0, 00:21:49.013 "state": "online", 00:21:49.013 "raid_level": "raid1", 00:21:49.013 "superblock": true, 00:21:49.013 "num_base_bdevs": 2, 00:21:49.013 "num_base_bdevs_discovered": 2, 00:21:49.013 "num_base_bdevs_operational": 2, 00:21:49.013 "base_bdevs_list": [ 00:21:49.013 { 00:21:49.013 "name": "BaseBdev1", 00:21:49.013 "uuid": "818f6b4b-23e1-59db-a361-b880e787e9c5", 00:21:49.013 "is_configured": true, 00:21:49.013 "data_offset": 2048, 00:21:49.013 "data_size": 63488 00:21:49.013 }, 00:21:49.013 { 00:21:49.013 "name": "BaseBdev2", 00:21:49.013 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:49.013 "is_configured": true, 00:21:49.013 "data_offset": 2048, 00:21:49.013 "data_size": 63488 00:21:49.013 } 00:21:49.013 ] 00:21:49.013 }' 00:21:49.013 22:29:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.013 22:29:48 -- common/autotest_common.sh@10 -- # set +x 00:21:49.581 22:29:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:49.581 22:29:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:49.840 [2024-07-14 22:29:48.871194] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.840 22:29:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:49.840 22:29:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.840 22:29:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:50.099 22:29:49 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:50.099 22:29:49 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:50.099 22:29:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:50.099 22:29:49 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:50.358 [2024-07-14 22:29:49.229765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005930 00:21:50.358 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:50.358 Zero copy mechanism will not be used. 00:21:50.358 Running I/O for 60 seconds... 00:21:50.358 [2024-07-14 22:29:49.316521] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.358 [2024-07-14 22:29:49.316744] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.358 22:29:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.616 22:29:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.616 "name": "raid_bdev1", 00:21:50.616 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:50.616 "strip_size_kb": 0, 00:21:50.616 "state": "online", 00:21:50.616 "raid_level": "raid1", 00:21:50.616 "superblock": true, 00:21:50.616 "num_base_bdevs": 2, 00:21:50.616 "num_base_bdevs_discovered": 1, 00:21:50.616 "num_base_bdevs_operational": 1, 00:21:50.616 "base_bdevs_list": [ 00:21:50.616 { 00:21:50.616 "name": null, 00:21:50.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.616 "is_configured": false, 00:21:50.616 "data_offset": 2048, 00:21:50.616 "data_size": 63488 00:21:50.616 }, 00:21:50.616 { 00:21:50.616 "name": "BaseBdev2", 00:21:50.616 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:50.616 "is_configured": true, 00:21:50.616 "data_offset": 2048, 00:21:50.616 "data_size": 63488 00:21:50.616 } 00:21:50.616 ] 00:21:50.616 }' 00:21:50.616 22:29:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.616 22:29:49 -- common/autotest_common.sh@10 -- # set +x 00:21:51.182 22:29:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.440 [2024-07-14 22:29:50.412245] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:51.440 [2024-07-14 22:29:50.412320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.440 22:29:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:51.440 [2024-07-14 22:29:50.470263] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:51.440 [2024-07-14 22:29:50.472048] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.698 [2024-07-14 22:29:50.596250] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:51.956 [2024-07-14 22:29:50.811264] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:51.956 [2024-07-14 22:29:50.811529] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:52.215 [2024-07-14 22:29:51.177171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:52.472 [2024-07-14 22:29:51.305481] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.472 22:29:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.472 [2024-07-14 22:29:51.528729] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:52.730 22:29:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.730 "name": "raid_bdev1", 00:21:52.730 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:52.730 "strip_size_kb": 0, 00:21:52.730 "state": "online", 00:21:52.730 "raid_level": "raid1", 00:21:52.730 "superblock": true, 00:21:52.730 "num_base_bdevs": 2, 00:21:52.730 "num_base_bdevs_discovered": 2, 00:21:52.730 "num_base_bdevs_operational": 2, 00:21:52.730 "process": { 00:21:52.730 "type": "rebuild", 00:21:52.730 "target": "spare", 00:21:52.730 "progress": { 00:21:52.730 "blocks": 14336, 00:21:52.730 "percent": 22 00:21:52.730 } 00:21:52.730 }, 00:21:52.730 "base_bdevs_list": [ 00:21:52.730 { 00:21:52.730 "name": "spare", 00:21:52.730 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:21:52.730 "is_configured": true, 00:21:52.730 "data_offset": 2048, 00:21:52.730 "data_size": 63488 00:21:52.730 }, 00:21:52.730 { 00:21:52.730 "name": "BaseBdev2", 00:21:52.730 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:52.730 "is_configured": true, 00:21:52.730 "data_offset": 2048, 00:21:52.730 "data_size": 63488 00:21:52.730 } 00:21:52.730 ] 00:21:52.730 }' 00:21:52.730 22:29:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.730 [2024-07-14 22:29:51.731390] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:52.730 22:29:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.730 22:29:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.988 22:29:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.988 22:29:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:53.245 [2024-07-14 22:29:52.061218] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:53.245 [2024-07-14 22:29:52.073297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.245 [2024-07-14 22:29:52.169908] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 
offset_begin: 18432 offset_end: 24576 00:21:53.245 [2024-07-14 22:29:52.276077] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.245 [2024-07-14 22:29:52.290074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.503 [2024-07-14 22:29:52.315689] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.503 22:29:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.503 "name": "raid_bdev1", 00:21:53.503 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:53.503 "strip_size_kb": 0, 00:21:53.503 "state": "online", 00:21:53.503 "raid_level": "raid1", 00:21:53.503 "superblock": true, 00:21:53.503 "num_base_bdevs": 2, 00:21:53.503 "num_base_bdevs_discovered": 1, 00:21:53.503 "num_base_bdevs_operational": 1, 00:21:53.503 "base_bdevs_list": [ 00:21:53.503 { 00:21:53.503 "name": null, 00:21:53.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.503 "is_configured": false, 00:21:53.503 "data_offset": 2048, 00:21:53.503 "data_size": 63488 00:21:53.503 }, 00:21:53.504 { 00:21:53.504 "name": "BaseBdev2", 00:21:53.504 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:53.504 "is_configured": true, 00:21:53.504 "data_offset": 2048, 00:21:53.504 "data_size": 63488 00:21:53.504 } 00:21:53.504 ] 00:21:53.504 }' 00:21:53.504 22:29:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.504 22:29:52 -- common/autotest_common.sh@10 -- # set +x 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.438 "name": "raid_bdev1", 00:21:54.438 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:54.438 "strip_size_kb": 0, 00:21:54.438 "state": "online", 00:21:54.438 "raid_level": "raid1", 00:21:54.438 "superblock": true, 00:21:54.438 "num_base_bdevs": 2, 00:21:54.438 
"num_base_bdevs_discovered": 1, 00:21:54.438 "num_base_bdevs_operational": 1, 00:21:54.438 "base_bdevs_list": [ 00:21:54.438 { 00:21:54.438 "name": null, 00:21:54.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.438 "is_configured": false, 00:21:54.438 "data_offset": 2048, 00:21:54.438 "data_size": 63488 00:21:54.438 }, 00:21:54.438 { 00:21:54.438 "name": "BaseBdev2", 00:21:54.438 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:54.438 "is_configured": true, 00:21:54.438 "data_offset": 2048, 00:21:54.438 "data_size": 63488 00:21:54.438 } 00:21:54.438 ] 00:21:54.438 }' 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.438 22:29:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.696 22:29:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:54.696 22:29:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:54.954 [2024-07-14 22:29:53.779296] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:54.954 [2024-07-14 22:29:53.779342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.954 22:29:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:54.954 [2024-07-14 22:29:53.813135] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:54.954 [2024-07-14 22:29:53.814709] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.954 [2024-07-14 22:29:53.923261] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:54.954 [2024-07-14 22:29:53.923619] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:55.212 [2024-07-14 22:29:54.148807] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:55.212 [2024-07-14 22:29:54.148971] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:55.470 [2024-07-14 22:29:54.484240] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:55.470 [2024-07-14 22:29:54.484590] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:55.728 [2024-07-14 22:29:54.685727] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:55.728 [2024-07-14 22:29:54.685871] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.986 22:29:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:55.986 [2024-07-14 22:29:55.014240] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:55.986 [2024-07-14 22:29:55.014631] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.245 "name": "raid_bdev1", 00:21:56.245 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:56.245 "strip_size_kb": 0, 00:21:56.245 "state": "online", 00:21:56.245 "raid_level": "raid1", 00:21:56.245 "superblock": true, 00:21:56.245 "num_base_bdevs": 2, 00:21:56.245 "num_base_bdevs_discovered": 2, 00:21:56.245 "num_base_bdevs_operational": 2, 00:21:56.245 "process": { 00:21:56.245 "type": "rebuild", 00:21:56.245 "target": "spare", 00:21:56.245 "progress": { 00:21:56.245 "blocks": 14336, 00:21:56.245 "percent": 22 00:21:56.245 } 00:21:56.245 }, 00:21:56.245 "base_bdevs_list": [ 00:21:56.245 { 00:21:56.245 "name": "spare", 00:21:56.245 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:21:56.245 "is_configured": true, 00:21:56.245 "data_offset": 2048, 00:21:56.245 "data_size": 63488 00:21:56.245 }, 00:21:56.245 { 00:21:56.245 "name": "BaseBdev2", 00:21:56.245 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:56.245 "is_configured": true, 00:21:56.245 "data_offset": 2048, 00:21:56.245 "data_size": 63488 00:21:56.245 } 00:21:56.245 ] 00:21:56.245 }' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:56.245 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@657 -- # local timeout=452 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.245 22:29:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.503 "name": "raid_bdev1", 00:21:56.503 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:56.503 "strip_size_kb": 0, 00:21:56.503 "state": "online", 00:21:56.503 "raid_level": "raid1", 00:21:56.503 "superblock": true, 00:21:56.503 "num_base_bdevs": 2, 00:21:56.503 "num_base_bdevs_discovered": 2, 00:21:56.503 "num_base_bdevs_operational": 2, 00:21:56.503 
"process": { 00:21:56.503 "type": "rebuild", 00:21:56.503 "target": "spare", 00:21:56.503 "progress": { 00:21:56.503 "blocks": 18432, 00:21:56.503 "percent": 29 00:21:56.503 } 00:21:56.503 }, 00:21:56.503 "base_bdevs_list": [ 00:21:56.503 { 00:21:56.503 "name": "spare", 00:21:56.503 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:21:56.503 "is_configured": true, 00:21:56.503 "data_offset": 2048, 00:21:56.503 "data_size": 63488 00:21:56.503 }, 00:21:56.503 { 00:21:56.503 "name": "BaseBdev2", 00:21:56.503 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:56.503 "is_configured": true, 00:21:56.503 "data_offset": 2048, 00:21:56.503 "data_size": 63488 00:21:56.503 } 00:21:56.503 ] 00:21:56.503 }' 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:56.503 [2024-07-14 22:29:55.401007] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.503 22:29:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:56.760 [2024-07-14 22:29:55.628944] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:57.325 [2024-07-14 22:29:56.321231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:57.325 [2024-07-14 22:29:56.321459] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.583 22:29:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.583 [2024-07-14 22:29:56.558827] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:57.840 [2024-07-14 22:29:56.671409] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.840 "name": "raid_bdev1", 00:21:57.840 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:57.840 "strip_size_kb": 0, 00:21:57.840 "state": "online", 00:21:57.840 "raid_level": "raid1", 00:21:57.840 "superblock": true, 00:21:57.840 "num_base_bdevs": 2, 00:21:57.840 "num_base_bdevs_discovered": 2, 00:21:57.840 "num_base_bdevs_operational": 2, 00:21:57.840 "process": { 00:21:57.840 "type": "rebuild", 00:21:57.840 "target": "spare", 00:21:57.840 "progress": { 00:21:57.840 "blocks": 40960, 00:21:57.840 "percent": 64 00:21:57.840 } 00:21:57.840 }, 00:21:57.840 "base_bdevs_list": [ 00:21:57.840 { 00:21:57.840 "name": "spare", 00:21:57.840 "uuid": 
"7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:21:57.840 "is_configured": true, 00:21:57.840 "data_offset": 2048, 00:21:57.840 "data_size": 63488 00:21:57.840 }, 00:21:57.840 { 00:21:57.840 "name": "BaseBdev2", 00:21:57.840 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:57.840 "is_configured": true, 00:21:57.840 "data_offset": 2048, 00:21:57.840 "data_size": 63488 00:21:57.840 } 00:21:57.840 ] 00:21:57.840 }' 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.840 22:29:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:58.097 [2024-07-14 22:29:56.975169] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:58.354 [2024-07-14 22:29:57.189321] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.918 22:29:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.918 [2024-07-14 22:29:57.838398] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.175 "name": "raid_bdev1", 00:21:59.175 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:21:59.175 "strip_size_kb": 0, 00:21:59.175 "state": "online", 00:21:59.175 "raid_level": "raid1", 00:21:59.175 "superblock": true, 00:21:59.175 "num_base_bdevs": 2, 00:21:59.175 "num_base_bdevs_discovered": 2, 00:21:59.175 "num_base_bdevs_operational": 2, 00:21:59.175 "process": { 00:21:59.175 "type": "rebuild", 00:21:59.175 "target": "spare", 00:21:59.175 "progress": { 00:21:59.175 "blocks": 59392, 00:21:59.175 "percent": 93 00:21:59.175 } 00:21:59.175 }, 00:21:59.175 "base_bdevs_list": [ 00:21:59.175 { 00:21:59.175 "name": "spare", 00:21:59.175 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:21:59.175 "is_configured": true, 00:21:59.175 "data_offset": 2048, 00:21:59.175 "data_size": 63488 00:21:59.175 }, 00:21:59.175 { 00:21:59.175 "name": "BaseBdev2", 00:21:59.175 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:21:59.175 "is_configured": true, 00:21:59.175 "data_offset": 2048, 00:21:59.175 "data_size": 63488 00:21:59.175 } 00:21:59.175 ] 00:21:59.175 }' 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.175 [2024-07-14 22:29:58.178167] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.175 22:29:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:59.433 [2024-07-14 22:29:58.278265] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:59.433 [2024-07-14 22:29:58.286588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.367 22:29:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.626 "name": "raid_bdev1", 00:22:00.626 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:22:00.626 "strip_size_kb": 0, 00:22:00.626 "state": "online", 00:22:00.626 "raid_level": "raid1", 00:22:00.626 "superblock": true, 00:22:00.626 "num_base_bdevs": 2, 00:22:00.626 "num_base_bdevs_discovered": 2, 00:22:00.626 "num_base_bdevs_operational": 2, 00:22:00.626 "base_bdevs_list": [ 00:22:00.626 { 00:22:00.626 "name": "spare", 00:22:00.626 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:22:00.626 "is_configured": true, 00:22:00.626 "data_offset": 2048, 00:22:00.626 "data_size": 63488 00:22:00.626 }, 00:22:00.626 { 00:22:00.626 "name": "BaseBdev2", 00:22:00.626 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:22:00.626 "is_configured": true, 00:22:00.626 "data_offset": 2048, 00:22:00.626 "data_size": 63488 00:22:00.626 } 00:22:00.626 ] 00:22:00.626 }' 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@660 -- # break 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.626 22:29:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.884 22:29:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.884 "name": "raid_bdev1", 00:22:00.884 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:22:00.884 "strip_size_kb": 0, 00:22:00.884 "state": "online", 00:22:00.884 "raid_level": "raid1", 00:22:00.884 "superblock": true, 00:22:00.884 "num_base_bdevs": 2, 00:22:00.884 "num_base_bdevs_discovered": 2, 00:22:00.884 "num_base_bdevs_operational": 2, 00:22:00.884 
"base_bdevs_list": [ 00:22:00.884 { 00:22:00.884 "name": "spare", 00:22:00.884 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:22:00.884 "is_configured": true, 00:22:00.884 "data_offset": 2048, 00:22:00.884 "data_size": 63488 00:22:00.884 }, 00:22:00.884 { 00:22:00.884 "name": "BaseBdev2", 00:22:00.884 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:22:00.884 "is_configured": true, 00:22:00.885 "data_offset": 2048, 00:22:00.885 "data_size": 63488 00:22:00.885 } 00:22:00.885 ] 00:22:00.885 }' 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.885 22:29:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.143 22:30:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.143 "name": "raid_bdev1", 00:22:01.143 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:22:01.143 "strip_size_kb": 0, 00:22:01.143 "state": "online", 00:22:01.143 "raid_level": "raid1", 00:22:01.143 "superblock": true, 00:22:01.143 "num_base_bdevs": 2, 00:22:01.143 "num_base_bdevs_discovered": 2, 00:22:01.143 "num_base_bdevs_operational": 2, 00:22:01.143 "base_bdevs_list": [ 00:22:01.143 { 00:22:01.143 "name": "spare", 00:22:01.143 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:22:01.143 "is_configured": true, 00:22:01.143 "data_offset": 2048, 00:22:01.143 "data_size": 63488 00:22:01.143 }, 00:22:01.143 { 00:22:01.143 "name": "BaseBdev2", 00:22:01.143 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:22:01.143 "is_configured": true, 00:22:01.143 "data_offset": 2048, 00:22:01.143 "data_size": 63488 00:22:01.143 } 00:22:01.143 ] 00:22:01.143 }' 00:22:01.143 22:30:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.143 22:30:00 -- common/autotest_common.sh@10 -- # set +x 00:22:02.101 22:30:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:02.101 [2024-07-14 22:30:01.103023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.101 [2024-07-14 22:30:01.103060] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.357 00:22:02.357 Latency(us) 00:22:02.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.357 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO 
size: 3145728) 00:22:02.357 raid_bdev1 : 11.96 112.68 338.05 0.00 0.00 12115.04 305.34 113436.86 00:22:02.357 =================================================================================================================== 00:22:02.357 Total : 112.68 338.05 0.00 0.00 12115.04 305.34 113436.86 00:22:02.357 [2024-07-14 22:30:01.209669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.357 [2024-07-14 22:30:01.209706] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.357 [2024-07-14 22:30:01.209784] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.357 0 00:22:02.357 [2024-07-14 22:30:01.209796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:02.357 22:30:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.357 22:30:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:02.613 22:30:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:02.613 22:30:01 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:02.613 22:30:01 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@12 -- # local i 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:02.613 22:30:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:02.870 /dev/nbd0 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:02.870 22:30:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:02.870 22:30:01 -- common/autotest_common.sh@857 -- # local i 00:22:02.870 22:30:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:02.870 22:30:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:02.870 22:30:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:02.870 22:30:01 -- common/autotest_common.sh@861 -- # break 00:22:02.870 22:30:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:02.870 22:30:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:02.870 22:30:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:02.870 1+0 records in 00:22:02.870 1+0 records out 00:22:02.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587071 s, 7.0 MB/s 00:22:02.870 22:30:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:02.870 22:30:01 -- common/autotest_common.sh@874 -- # size=4096 00:22:02.870 22:30:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:02.870 22:30:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:02.870 22:30:01 -- common/autotest_common.sh@877 -- # return 0 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:02.870 22:30:01 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:02.870 22:30:01 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:22:02.870 22:30:01 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@12 -- # local i 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:02.870 22:30:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:03.128 /dev/nbd1 00:22:03.128 22:30:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:03.128 22:30:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:03.128 22:30:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:03.128 22:30:02 -- common/autotest_common.sh@857 -- # local i 00:22:03.128 22:30:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:03.128 22:30:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:03.128 22:30:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:03.128 22:30:02 -- common/autotest_common.sh@861 -- # break 00:22:03.128 22:30:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:03.128 22:30:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:03.128 22:30:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:03.128 1+0 records in 00:22:03.128 1+0 records out 00:22:03.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536363 s, 7.6 MB/s 00:22:03.128 22:30:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.128 22:30:02 -- common/autotest_common.sh@874 -- # size=4096 00:22:03.128 22:30:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.128 22:30:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:03.128 22:30:02 -- common/autotest_common.sh@877 -- # return 0 00:22:03.128 22:30:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:03.128 22:30:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.128 22:30:02 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:03.386 22:30:02 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@51 -- # local i 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.386 22:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:03.645 22:30:02 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:03.645 22:30:02 -- bdev/nbd_common.sh@41 -- # break 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.646 22:30:02 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@51 -- # local i 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.646 22:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@41 -- # break 00:22:03.904 22:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.904 22:30:02 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:03.904 22:30:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:03.904 22:30:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:03.904 22:30:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:04.161 22:30:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:04.419 [2024-07-14 22:30:03.441145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:04.419 [2024-07-14 22:30:03.441239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.419 [2024-07-14 22:30:03.441276] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:04.419 [2024-07-14 22:30:03.441303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.419 [2024-07-14 22:30:03.443442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.419 [2024-07-14 22:30:03.443504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:04.419 [2024-07-14 22:30:03.443621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:04.419 [2024-07-14 
22:30:03.443686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:04.419 BaseBdev1 00:22:04.419 22:30:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:04.419 22:30:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:04.419 22:30:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:04.677 22:30:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:04.935 [2024-07-14 22:30:03.873278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:04.935 [2024-07-14 22:30:03.873346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.935 [2024-07-14 22:30:03.873378] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:04.935 [2024-07-14 22:30:03.873399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.935 [2024-07-14 22:30:03.873754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.935 [2024-07-14 22:30:03.873804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:04.935 [2024-07-14 22:30:03.873887] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:04.935 [2024-07-14 22:30:03.873901] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:04.935 [2024-07-14 22:30:03.873908] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.935 [2024-07-14 22:30:03.873930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:22:04.935 [2024-07-14 22:30:03.873989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:04.935 BaseBdev2 00:22:04.935 22:30:03 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:05.193 22:30:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:05.451 [2024-07-14 22:30:04.261558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:05.451 [2024-07-14 22:30:04.261652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.451 [2024-07-14 22:30:04.261718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:05.451 [2024-07-14 22:30:04.261757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.451 [2024-07-14 22:30:04.262485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.451 [2024-07-14 22:30:04.262559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:05.451 [2024-07-14 22:30:04.262705] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:05.451 [2024-07-14 22:30:04.262738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:05.451 spare 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:05.451 22:30:04 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.451 [2024-07-14 22:30:04.362896] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:22:05.451 [2024-07-14 22:30:04.362917] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:05.451 [2024-07-14 22:30:04.363094] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:22:05.451 [2024-07-14 22:30:04.363453] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:22:05.451 [2024-07-14 22:30:04.363488] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:22:05.451 [2024-07-14 22:30:04.363630] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.451 "name": "raid_bdev1", 00:22:05.451 "uuid": "6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:22:05.451 "strip_size_kb": 0, 00:22:05.451 "state": "online", 00:22:05.451 "raid_level": "raid1", 00:22:05.451 "superblock": true, 00:22:05.451 "num_base_bdevs": 2, 00:22:05.451 "num_base_bdevs_discovered": 2, 00:22:05.451 "num_base_bdevs_operational": 2, 00:22:05.451 "base_bdevs_list": [ 00:22:05.451 { 00:22:05.451 "name": "spare", 00:22:05.451 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:22:05.451 "is_configured": true, 00:22:05.451 "data_offset": 2048, 00:22:05.451 "data_size": 63488 00:22:05.451 }, 00:22:05.451 { 00:22:05.451 "name": "BaseBdev2", 00:22:05.451 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:22:05.451 "is_configured": true, 00:22:05.451 "data_offset": 2048, 00:22:05.451 "data_size": 63488 00:22:05.451 } 00:22:05.451 ] 00:22:05.451 }' 00:22:05.451 22:30:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.451 22:30:04 -- common/autotest_common.sh@10 -- # set +x 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:06.385 "name": "raid_bdev1", 00:22:06.385 "uuid": 
"6772dee4-5f79-4a3a-a3ff-747ec27d1014", 00:22:06.385 "strip_size_kb": 0, 00:22:06.385 "state": "online", 00:22:06.385 "raid_level": "raid1", 00:22:06.385 "superblock": true, 00:22:06.385 "num_base_bdevs": 2, 00:22:06.385 "num_base_bdevs_discovered": 2, 00:22:06.385 "num_base_bdevs_operational": 2, 00:22:06.385 "base_bdevs_list": [ 00:22:06.385 { 00:22:06.385 "name": "spare", 00:22:06.385 "uuid": "7d39ac50-9f29-589c-a87a-d4c3b73207b4", 00:22:06.385 "is_configured": true, 00:22:06.385 "data_offset": 2048, 00:22:06.385 "data_size": 63488 00:22:06.385 }, 00:22:06.385 { 00:22:06.385 "name": "BaseBdev2", 00:22:06.385 "uuid": "f7f625c3-ec65-5a13-b70a-ef914b015888", 00:22:06.385 "is_configured": true, 00:22:06.385 "data_offset": 2048, 00:22:06.385 "data_size": 63488 00:22:06.385 } 00:22:06.385 ] 00:22:06.385 }' 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.385 22:30:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:06.643 22:30:05 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.643 22:30:05 -- bdev/bdev_raid.sh@709 -- # killprocess 127340 00:22:06.643 22:30:05 -- common/autotest_common.sh@926 -- # '[' -z 127340 ']' 00:22:06.643 22:30:05 -- common/autotest_common.sh@930 -- # kill -0 127340 00:22:06.643 22:30:05 -- common/autotest_common.sh@931 -- # uname 00:22:06.643 22:30:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:06.643 22:30:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127340 00:22:06.901 killing process with pid 127340 00:22:06.901 Received shutdown signal, test time was about 16.486843 seconds 00:22:06.901 00:22:06.901 Latency(us) 00:22:06.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.901 =================================================================================================================== 00:22:06.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.902 22:30:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:06.902 22:30:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:06.902 22:30:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127340' 00:22:06.902 22:30:05 -- common/autotest_common.sh@945 -- # kill 127340 00:22:06.902 22:30:05 -- common/autotest_common.sh@950 -- # wait 127340 00:22:06.902 [2024-07-14 22:30:05.719148] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:06.902 [2024-07-14 22:30:05.719286] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.902 [2024-07-14 22:30:05.719417] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.902 [2024-07-14 22:30:05.719434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:22:06.902 [2024-07-14 22:30:05.885784] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:08.298 ************************************ 00:22:08.298 END TEST raid_rebuild_test_sb_io 00:22:08.298 ************************************ 00:22:08.298 22:30:07 -- 
bdev/bdev_raid.sh@711 -- # return 0 00:22:08.298 00:22:08.298 real 0m22.118s 00:22:08.298 user 0m35.084s 00:22:08.298 sys 0m2.301s 00:22:08.299 22:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.299 22:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:22:08.299 22:30:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:08.299 22:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.299 22:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:08.299 ************************************ 00:22:08.299 START TEST raid_rebuild_test 00:22:08.299 ************************************ 00:22:08.299 22:30:07 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@544 -- # raid_pid=127952 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127952 /var/tmp/spdk-raid.sock 00:22:08.299 22:30:07 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:08.299 22:30:07 -- common/autotest_common.sh@819 -- # '[' -z 127952 ']' 00:22:08.299 22:30:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:08.299 22:30:07 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:08.299 22:30:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:08.299 22:30:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.299 22:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:08.299 [2024-07-14 22:30:07.166127] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:08.299 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:08.299 Zero copy mechanism will not be used. 00:22:08.299 [2024-07-14 22:30:07.166326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127952 ] 00:22:08.299 [2024-07-14 22:30:07.336425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.557 [2024-07-14 22:30:07.547394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.815 [2024-07-14 22:30:07.756299] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.073 22:30:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.073 22:30:08 -- common/autotest_common.sh@852 -- # return 0 00:22:09.073 22:30:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.073 22:30:08 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.073 22:30:08 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:09.331 BaseBdev1 00:22:09.331 22:30:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.331 22:30:08 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.331 22:30:08 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.589 BaseBdev2 00:22:09.589 22:30:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.589 22:30:08 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.589 22:30:08 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.848 BaseBdev3 00:22:09.848 22:30:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.848 22:30:08 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.848 22:30:08 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:10.106 BaseBdev4 00:22:10.106 22:30:09 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:10.365 spare_malloc 00:22:10.365 22:30:09 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:10.623 spare_delay 00:22:10.623 22:30:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:10.882 [2024-07-14 22:30:09.702553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:10.882 
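The spare device assembled in the trace above is not a plain malloc bdev: a delay bdev is stacked on top of it before the passthru wraps it. The three RPCs below are copied from the log; the flag meanings are an assumption based on the usual bdev_delay_create signature (-r/-t average and p99 read latency, -w/-n average and p99 write latency, in microseconds), under which the 100000 values give the spare roughly 100 ms of write latency, slow enough for the rebuild to stay observable while the test polls it.

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MB backing store with 512 B blocks: 65536 blocks, matching the
# "blockcnt 65536, blocklen 512" line later in the log
$rpc bdev_malloc_create 32 512 -b spare_malloc

# zero read latency, ~100 ms average/p99 write latency (assumed flag order)
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000

# the passthru gives the stack the name ("spare") that the raid test expects
$rpc bdev_passthru_create -b spare_delay -p spare
```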
[2024-07-14 22:30:09.702661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.882 [2024-07-14 22:30:09.702715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:10.882 [2024-07-14 22:30:09.702762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.882 [2024-07-14 22:30:09.705028] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.882 [2024-07-14 22:30:09.705072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:10.882 spare 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:10.882 [2024-07-14 22:30:09.886569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.882 [2024-07-14 22:30:09.888579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.882 [2024-07-14 22:30:09.888631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.882 [2024-07-14 22:30:09.888668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:10.882 [2024-07-14 22:30:09.888738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:10.882 [2024-07-14 22:30:09.888750] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:10.882 [2024-07-14 22:30:09.888895] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:10.882 [2024-07-14 22:30:09.889224] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:10.882 [2024-07-14 22:30:09.889246] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:10.882 [2024-07-14 22:30:09.889394] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.882 22:30:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.140 22:30:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.140 "name": "raid_bdev1", 00:22:11.140 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:11.140 "strip_size_kb": 0, 00:22:11.140 "state": "online", 00:22:11.140 "raid_level": "raid1", 00:22:11.140 "superblock": false, 00:22:11.140 "num_base_bdevs": 4, 00:22:11.140 "num_base_bdevs_discovered": 4, 00:22:11.140 
"num_base_bdevs_operational": 4, 00:22:11.140 "base_bdevs_list": [ 00:22:11.140 { 00:22:11.140 "name": "BaseBdev1", 00:22:11.140 "uuid": "cfcd17cf-0e94-4262-914a-0214f1bc1fc0", 00:22:11.140 "is_configured": true, 00:22:11.140 "data_offset": 0, 00:22:11.140 "data_size": 65536 00:22:11.140 }, 00:22:11.140 { 00:22:11.141 "name": "BaseBdev2", 00:22:11.141 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:11.141 "is_configured": true, 00:22:11.141 "data_offset": 0, 00:22:11.141 "data_size": 65536 00:22:11.141 }, 00:22:11.141 { 00:22:11.141 "name": "BaseBdev3", 00:22:11.141 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:11.141 "is_configured": true, 00:22:11.141 "data_offset": 0, 00:22:11.141 "data_size": 65536 00:22:11.141 }, 00:22:11.141 { 00:22:11.141 "name": "BaseBdev4", 00:22:11.141 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:11.141 "is_configured": true, 00:22:11.141 "data_offset": 0, 00:22:11.141 "data_size": 65536 00:22:11.141 } 00:22:11.141 ] 00:22:11.141 }' 00:22:11.141 22:30:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.141 22:30:10 -- common/autotest_common.sh@10 -- # set +x 00:22:11.706 22:30:10 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:11.706 22:30:10 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:11.967 [2024-07-14 22:30:11.003064] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.967 22:30:11 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:11.967 22:30:11 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:11.967 22:30:11 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.225 22:30:11 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:12.225 22:30:11 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:12.225 22:30:11 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:12.225 22:30:11 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@12 -- # local i 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:12.225 22:30:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:12.483 [2024-07-14 22:30:11.442914] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:12.483 /dev/nbd0 00:22:12.483 22:30:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:12.483 22:30:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:12.483 22:30:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:12.483 22:30:11 -- common/autotest_common.sh@857 -- # local i 00:22:12.483 22:30:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:12.483 22:30:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:12.483 22:30:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:12.483 22:30:11 -- common/autotest_common.sh@861 
-- # break 00:22:12.483 22:30:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:12.483 22:30:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:12.483 22:30:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:12.483 1+0 records in 00:22:12.483 1+0 records out 00:22:12.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278232 s, 14.7 MB/s 00:22:12.483 22:30:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.483 22:30:11 -- common/autotest_common.sh@874 -- # size=4096 00:22:12.483 22:30:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.483 22:30:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:12.483 22:30:11 -- common/autotest_common.sh@877 -- # return 0 00:22:12.483 22:30:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:12.483 22:30:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:12.483 22:30:11 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:12.484 22:30:11 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:12.484 22:30:11 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:19.101 65536+0 records in 00:22:19.101 65536+0 records out 00:22:19.101 33554432 bytes (34 MB, 32 MiB) copied, 6.01743 s, 5.6 MB/s 00:22:19.101 22:30:17 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@51 -- # local i 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:19.101 [2024-07-14 22:30:17.763163] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@41 -- # break 00:22:19.101 22:30:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.101 22:30:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:19.101 [2024-07-14 22:30:18.094846] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.101 22:30:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.360 22:30:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.360 "name": "raid_bdev1", 00:22:19.360 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:19.360 "strip_size_kb": 0, 00:22:19.360 "state": "online", 00:22:19.360 "raid_level": "raid1", 00:22:19.360 "superblock": false, 00:22:19.360 "num_base_bdevs": 4, 00:22:19.360 "num_base_bdevs_discovered": 3, 00:22:19.360 "num_base_bdevs_operational": 3, 00:22:19.360 "base_bdevs_list": [ 00:22:19.360 { 00:22:19.360 "name": null, 00:22:19.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.360 "is_configured": false, 00:22:19.360 "data_offset": 0, 00:22:19.360 "data_size": 65536 00:22:19.360 }, 00:22:19.360 { 00:22:19.360 "name": "BaseBdev2", 00:22:19.360 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:19.360 "is_configured": true, 00:22:19.360 "data_offset": 0, 00:22:19.360 "data_size": 65536 00:22:19.360 }, 00:22:19.360 { 00:22:19.360 "name": "BaseBdev3", 00:22:19.360 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:19.360 "is_configured": true, 00:22:19.360 "data_offset": 0, 00:22:19.360 "data_size": 65536 00:22:19.360 }, 00:22:19.360 { 00:22:19.360 "name": "BaseBdev4", 00:22:19.360 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:19.360 "is_configured": true, 00:22:19.360 "data_offset": 0, 00:22:19.360 "data_size": 65536 00:22:19.360 } 00:22:19.360 ] 00:22:19.360 }' 00:22:19.360 22:30:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.360 22:30:18 -- common/autotest_common.sh@10 -- # set +x 00:22:19.926 22:30:18 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:20.184 [2024-07-14 22:30:19.164494] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:20.184 [2024-07-14 22:30:19.164574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:20.184 [2024-07-14 22:30:19.176843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:22:20.184 [2024-07-14 22:30:19.178784] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:20.184 22:30:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.557 22:30:20 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.557 "name": "raid_bdev1", 00:22:21.557 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:21.557 "strip_size_kb": 0, 00:22:21.557 "state": "online", 00:22:21.557 "raid_level": "raid1", 00:22:21.557 "superblock": false, 00:22:21.557 "num_base_bdevs": 4, 00:22:21.557 "num_base_bdevs_discovered": 4, 00:22:21.557 "num_base_bdevs_operational": 4, 00:22:21.557 "process": { 00:22:21.557 "type": "rebuild", 00:22:21.557 "target": "spare", 00:22:21.557 "progress": { 00:22:21.557 "blocks": 24576, 00:22:21.557 "percent": 37 00:22:21.557 } 00:22:21.557 }, 00:22:21.557 "base_bdevs_list": [ 00:22:21.557 { 00:22:21.557 "name": "spare", 00:22:21.557 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:21.557 "is_configured": true, 00:22:21.557 "data_offset": 0, 00:22:21.557 "data_size": 65536 00:22:21.557 }, 00:22:21.557 { 00:22:21.557 "name": "BaseBdev2", 00:22:21.557 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:21.557 "is_configured": true, 00:22:21.557 "data_offset": 0, 00:22:21.557 "data_size": 65536 00:22:21.557 }, 00:22:21.557 { 00:22:21.557 "name": "BaseBdev3", 00:22:21.557 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:21.557 "is_configured": true, 00:22:21.557 "data_offset": 0, 00:22:21.557 "data_size": 65536 00:22:21.557 }, 00:22:21.557 { 00:22:21.557 "name": "BaseBdev4", 00:22:21.557 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:21.557 "is_configured": true, 00:22:21.557 "data_offset": 0, 00:22:21.557 "data_size": 65536 00:22:21.557 } 00:22:21.557 ] 00:22:21.557 }' 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.557 22:30:20 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:21.814 [2024-07-14 22:30:20.769223] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:21.814 [2024-07-14 22:30:20.789726] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:21.814 [2024-07-14 22:30:20.789904] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.814 22:30:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:22:22.072 22:30:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.072 "name": "raid_bdev1", 00:22:22.072 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:22.072 "strip_size_kb": 0, 00:22:22.072 "state": "online", 00:22:22.072 "raid_level": "raid1", 00:22:22.072 "superblock": false, 00:22:22.072 "num_base_bdevs": 4, 00:22:22.072 "num_base_bdevs_discovered": 3, 00:22:22.072 "num_base_bdevs_operational": 3, 00:22:22.072 "base_bdevs_list": [ 00:22:22.072 { 00:22:22.072 "name": null, 00:22:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.072 "is_configured": false, 00:22:22.072 "data_offset": 0, 00:22:22.072 "data_size": 65536 00:22:22.072 }, 00:22:22.072 { 00:22:22.072 "name": "BaseBdev2", 00:22:22.072 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:22.072 "is_configured": true, 00:22:22.072 "data_offset": 0, 00:22:22.072 "data_size": 65536 00:22:22.072 }, 00:22:22.072 { 00:22:22.072 "name": "BaseBdev3", 00:22:22.072 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:22.072 "is_configured": true, 00:22:22.072 "data_offset": 0, 00:22:22.072 "data_size": 65536 00:22:22.072 }, 00:22:22.072 { 00:22:22.072 "name": "BaseBdev4", 00:22:22.072 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:22.072 "is_configured": true, 00:22:22.072 "data_offset": 0, 00:22:22.072 "data_size": 65536 00:22:22.072 } 00:22:22.072 ] 00:22:22.072 }' 00:22:22.072 22:30:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.072 22:30:21 -- common/autotest_common.sh@10 -- # set +x 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.638 22:30:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.896 22:30:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:22.896 "name": "raid_bdev1", 00:22:22.896 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:22.896 "strip_size_kb": 0, 00:22:22.896 "state": "online", 00:22:22.896 "raid_level": "raid1", 00:22:22.896 "superblock": false, 00:22:22.896 "num_base_bdevs": 4, 00:22:22.896 "num_base_bdevs_discovered": 3, 00:22:22.896 "num_base_bdevs_operational": 3, 00:22:22.896 "base_bdevs_list": [ 00:22:22.896 { 00:22:22.896 "name": null, 00:22:22.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.896 "is_configured": false, 00:22:22.896 "data_offset": 0, 00:22:22.896 "data_size": 65536 00:22:22.896 }, 00:22:22.896 { 00:22:22.896 "name": "BaseBdev2", 00:22:22.896 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:22.896 "is_configured": true, 00:22:22.896 "data_offset": 0, 00:22:22.896 "data_size": 65536 00:22:22.896 }, 00:22:22.896 { 00:22:22.896 "name": "BaseBdev3", 00:22:22.896 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:22.896 "is_configured": true, 00:22:22.896 "data_offset": 0, 00:22:22.896 "data_size": 65536 00:22:22.896 }, 00:22:22.896 { 00:22:22.896 "name": "BaseBdev4", 00:22:22.897 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:22.897 "is_configured": true, 00:22:22.897 "data_offset": 0, 00:22:22.897 "data_size": 65536 00:22:22.897 } 00:22:22.897 ] 00:22:22.897 }' 
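The verify_raid_bdev_process checks traced above (bdev_raid.sh@183-191) amount to fetching the raid bdev's JSON over the RPC socket and comparing its process fields with the expected rebuild state. A minimal sketch of that helper, reconstructed from the xtrace lines; the rpc.py path and socket are verbatim from the log, while the function body beyond what the trace shows is an assumption:

```bash
verify_raid_bdev_process() {
	local raid_bdev_name=$1
	local process_type=$2   # expected .process.type, e.g. "rebuild" or "none"
	local target=$3         # expected .process.target, e.g. "spare" or "none"
	local raid_bdev_info

	raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
		-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
		jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	# '// "none"' substitutes a default when no background process is running,
	# which is why an idle array is compared against the literal string "none"
	[[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] &&
	[[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}
```

The odd-looking `[[ none == \n\o\n\e ]]` lines in the log are plain bash xtrace: the right-hand side of `==` inside `[[ ]]` is a pattern, so xtrace escapes each character to show that the quoted operand is matched literally.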
00:22:22.897 22:30:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:22.897 22:30:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:22.897 22:30:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.155 22:30:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:23.155 22:30:21 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.155 [2024-07-14 22:30:22.204657] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:23.155 [2024-07-14 22:30:22.204719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.155 [2024-07-14 22:30:22.217542] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:22:23.155 [2024-07-14 22:30:22.220086] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:23.413 22:30:22 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:24.373 22:30:23 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.374 22:30:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.634 22:30:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.634 "name": "raid_bdev1", 00:22:24.634 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:24.634 "strip_size_kb": 0, 00:22:24.634 "state": "online", 00:22:24.634 "raid_level": "raid1", 00:22:24.634 "superblock": false, 00:22:24.634 "num_base_bdevs": 4, 00:22:24.634 "num_base_bdevs_discovered": 4, 00:22:24.634 "num_base_bdevs_operational": 4, 00:22:24.634 "process": { 00:22:24.634 "type": "rebuild", 00:22:24.634 "target": "spare", 00:22:24.634 "progress": { 00:22:24.634 "blocks": 22528, 00:22:24.634 "percent": 34 00:22:24.634 } 00:22:24.634 }, 00:22:24.634 "base_bdevs_list": [ 00:22:24.634 { 00:22:24.634 "name": "spare", 00:22:24.634 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:24.634 "is_configured": true, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 65536 00:22:24.634 }, 00:22:24.634 { 00:22:24.634 "name": "BaseBdev2", 00:22:24.634 "uuid": "84dea39a-cbe4-4400-b47d-d972c90278c2", 00:22:24.634 "is_configured": true, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 65536 00:22:24.634 }, 00:22:24.634 { 00:22:24.634 "name": "BaseBdev3", 00:22:24.635 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:24.635 "is_configured": true, 00:22:24.635 "data_offset": 0, 00:22:24.635 "data_size": 65536 00:22:24.635 }, 00:22:24.635 { 00:22:24.635 "name": "BaseBdev4", 00:22:24.635 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:24.635 "is_configured": true, 00:22:24.635 "data_offset": 0, 00:22:24.635 "data_size": 65536 00:22:24.635 } 00:22:24.635 ] 00:22:24.635 }' 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
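Condensed, the rebuild sequence traced above is three RPC calls plus a settling delay, run after the array was filled with random data over NBD (the dd at bdev_raid.sh@586). The commands are copied from the log; verify_raid_bdev_process is the helper sketched earlier:

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_remove_base_bdev BaseBdev1          # degrade the raid1: 4 -> 3 operational
$rpc bdev_raid_add_base_bdev raid_bdev1 spare      # re-add a disk: "Started rebuild on raid bdev raid_bdev1"
sleep 1                                            # bdev_raid.sh@614: let the process state propagate
verify_raid_bdev_process raid_bdev1 rebuild spare  # .process.type/.process.target must report the rebuild
```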
00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:24.635 22:30:23 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:24.891 [2024-07-14 22:30:23.782168] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:24.891 [2024-07-14 22:30:23.830677] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.891 22:30:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.148 "name": "raid_bdev1", 00:22:25.148 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:25.148 "strip_size_kb": 0, 00:22:25.148 "state": "online", 00:22:25.148 "raid_level": "raid1", 00:22:25.148 "superblock": false, 00:22:25.148 "num_base_bdevs": 4, 00:22:25.148 "num_base_bdevs_discovered": 3, 00:22:25.148 "num_base_bdevs_operational": 3, 00:22:25.148 "process": { 00:22:25.148 "type": "rebuild", 00:22:25.148 "target": "spare", 00:22:25.148 "progress": { 00:22:25.148 "blocks": 36864, 00:22:25.148 "percent": 56 00:22:25.148 } 00:22:25.148 }, 00:22:25.148 "base_bdevs_list": [ 00:22:25.148 { 00:22:25.148 "name": "spare", 00:22:25.148 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:25.148 "is_configured": true, 00:22:25.148 "data_offset": 0, 00:22:25.148 "data_size": 65536 00:22:25.148 }, 00:22:25.148 { 00:22:25.148 "name": null, 00:22:25.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.148 "is_configured": false, 00:22:25.148 "data_offset": 0, 00:22:25.148 "data_size": 65536 00:22:25.148 }, 00:22:25.148 { 00:22:25.148 "name": "BaseBdev3", 00:22:25.148 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:25.148 "is_configured": true, 00:22:25.148 "data_offset": 0, 00:22:25.148 "data_size": 65536 00:22:25.148 }, 00:22:25.148 { 00:22:25.148 "name": "BaseBdev4", 00:22:25.148 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:25.148 "is_configured": true, 00:22:25.148 "data_offset": 0, 00:22:25.148 "data_size": 65536 00:22:25.148 } 00:22:25.148 ] 00:22:25.148 }' 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@657 
-- # local timeout=481 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.148 22:30:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.406 22:30:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.406 "name": "raid_bdev1", 00:22:25.406 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:25.406 "strip_size_kb": 0, 00:22:25.406 "state": "online", 00:22:25.406 "raid_level": "raid1", 00:22:25.406 "superblock": false, 00:22:25.406 "num_base_bdevs": 4, 00:22:25.406 "num_base_bdevs_discovered": 3, 00:22:25.406 "num_base_bdevs_operational": 3, 00:22:25.406 "process": { 00:22:25.406 "type": "rebuild", 00:22:25.406 "target": "spare", 00:22:25.406 "progress": { 00:22:25.406 "blocks": 43008, 00:22:25.406 "percent": 65 00:22:25.406 } 00:22:25.406 }, 00:22:25.406 "base_bdevs_list": [ 00:22:25.406 { 00:22:25.406 "name": "spare", 00:22:25.406 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:25.406 "is_configured": true, 00:22:25.406 "data_offset": 0, 00:22:25.406 "data_size": 65536 00:22:25.406 }, 00:22:25.406 { 00:22:25.406 "name": null, 00:22:25.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.406 "is_configured": false, 00:22:25.406 "data_offset": 0, 00:22:25.406 "data_size": 65536 00:22:25.406 }, 00:22:25.406 { 00:22:25.406 "name": "BaseBdev3", 00:22:25.406 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:25.406 "is_configured": true, 00:22:25.406 "data_offset": 0, 00:22:25.406 "data_size": 65536 00:22:25.406 }, 00:22:25.406 { 00:22:25.406 "name": "BaseBdev4", 00:22:25.406 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:25.406 "is_configured": true, 00:22:25.406 "data_offset": 0, 00:22:25.406 "data_size": 65536 00:22:25.406 } 00:22:25.406 ] 00:22:25.406 }' 00:22:25.406 22:30:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.406 22:30:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.407 22:30:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.664 22:30:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.664 22:30:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:26.596 [2024-07-14 22:30:25.441036] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:26.596 [2024-07-14 22:30:25.441130] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:26.596 [2024-07-14 22:30:25.441223] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.596 22:30:25 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.596 22:30:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.854 "name": "raid_bdev1", 00:22:26.854 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:26.854 "strip_size_kb": 0, 00:22:26.854 "state": "online", 00:22:26.854 "raid_level": "raid1", 00:22:26.854 "superblock": false, 00:22:26.854 "num_base_bdevs": 4, 00:22:26.854 "num_base_bdevs_discovered": 3, 00:22:26.854 "num_base_bdevs_operational": 3, 00:22:26.854 "base_bdevs_list": [ 00:22:26.854 { 00:22:26.854 "name": "spare", 00:22:26.854 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:26.854 "is_configured": true, 00:22:26.854 "data_offset": 0, 00:22:26.854 "data_size": 65536 00:22:26.854 }, 00:22:26.854 { 00:22:26.854 "name": null, 00:22:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.854 "is_configured": false, 00:22:26.854 "data_offset": 0, 00:22:26.854 "data_size": 65536 00:22:26.854 }, 00:22:26.854 { 00:22:26.854 "name": "BaseBdev3", 00:22:26.854 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:26.854 "is_configured": true, 00:22:26.854 "data_offset": 0, 00:22:26.854 "data_size": 65536 00:22:26.854 }, 00:22:26.854 { 00:22:26.854 "name": "BaseBdev4", 00:22:26.854 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:26.854 "is_configured": true, 00:22:26.854 "data_offset": 0, 00:22:26.854 "data_size": 65536 00:22:26.854 } 00:22:26.854 ] 00:22:26.854 }' 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@660 -- # break 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.854 22:30:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.112 22:30:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.112 "name": "raid_bdev1", 00:22:27.112 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:27.112 "strip_size_kb": 0, 00:22:27.112 "state": "online", 00:22:27.112 "raid_level": "raid1", 00:22:27.112 "superblock": false, 00:22:27.112 "num_base_bdevs": 4, 00:22:27.112 "num_base_bdevs_discovered": 3, 00:22:27.112 "num_base_bdevs_operational": 3, 00:22:27.112 "base_bdevs_list": [ 00:22:27.112 { 00:22:27.112 "name": "spare", 00:22:27.112 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:27.112 "is_configured": true, 00:22:27.112 "data_offset": 0, 00:22:27.112 "data_size": 65536 00:22:27.112 }, 00:22:27.112 { 00:22:27.112 "name": null, 00:22:27.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.112 "is_configured": false, 00:22:27.112 "data_offset": 0, 
00:22:27.112 "data_size": 65536 00:22:27.112 }, 00:22:27.112 { 00:22:27.112 "name": "BaseBdev3", 00:22:27.112 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:27.112 "is_configured": true, 00:22:27.112 "data_offset": 0, 00:22:27.112 "data_size": 65536 00:22:27.112 }, 00:22:27.112 { 00:22:27.112 "name": "BaseBdev4", 00:22:27.112 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:27.112 "is_configured": true, 00:22:27.112 "data_offset": 0, 00:22:27.112 "data_size": 65536 00:22:27.112 } 00:22:27.112 ] 00:22:27.112 }' 00:22:27.112 22:30:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.112 22:30:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.373 22:30:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.634 22:30:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:27.634 "name": "raid_bdev1", 00:22:27.634 "uuid": "c631ef35-00b5-4acf-9c03-f35f14582bce", 00:22:27.634 "strip_size_kb": 0, 00:22:27.634 "state": "online", 00:22:27.634 "raid_level": "raid1", 00:22:27.634 "superblock": false, 00:22:27.634 "num_base_bdevs": 4, 00:22:27.634 "num_base_bdevs_discovered": 3, 00:22:27.634 "num_base_bdevs_operational": 3, 00:22:27.634 "base_bdevs_list": [ 00:22:27.634 { 00:22:27.634 "name": "spare", 00:22:27.634 "uuid": "ea9f1417-ffe6-5bbb-bcbb-2db6c4a0670d", 00:22:27.634 "is_configured": true, 00:22:27.634 "data_offset": 0, 00:22:27.634 "data_size": 65536 00:22:27.634 }, 00:22:27.634 { 00:22:27.634 "name": null, 00:22:27.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.634 "is_configured": false, 00:22:27.634 "data_offset": 0, 00:22:27.634 "data_size": 65536 00:22:27.634 }, 00:22:27.634 { 00:22:27.634 "name": "BaseBdev3", 00:22:27.634 "uuid": "8e039ec6-5278-4d23-9be4-08dc5e222a70", 00:22:27.634 "is_configured": true, 00:22:27.634 "data_offset": 0, 00:22:27.634 "data_size": 65536 00:22:27.634 }, 00:22:27.634 { 00:22:27.634 "name": "BaseBdev4", 00:22:27.634 "uuid": "fb546ccf-4644-4d3a-9f47-0ed6a5f84dde", 00:22:27.634 "is_configured": true, 00:22:27.634 "data_offset": 0, 00:22:27.634 "data_size": 65536 00:22:27.634 } 00:22:27.634 ] 00:22:27.634 }' 00:22:27.634 22:30:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:27.634 22:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.200 22:30:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:22:28.458 [2024-07-14 22:30:27.314818] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:28.458 [2024-07-14 22:30:27.314872] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.458 [2024-07-14 22:30:27.314981] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.458 [2024-07-14 22:30:27.315117] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.458 [2024-07-14 22:30:27.315140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:28.458 22:30:27 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.458 22:30:27 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:28.716 22:30:27 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:28.716 22:30:27 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:28.716 22:30:27 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@12 -- # local i 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:28.716 22:30:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:28.974 /dev/nbd0 00:22:28.974 22:30:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:28.974 22:30:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:28.974 22:30:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:28.974 22:30:27 -- common/autotest_common.sh@857 -- # local i 00:22:28.974 22:30:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:28.974 22:30:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:28.974 22:30:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:28.974 22:30:27 -- common/autotest_common.sh@861 -- # break 00:22:28.974 22:30:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:28.974 22:30:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:28.974 22:30:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:28.974 1+0 records in 00:22:28.974 1+0 records out 00:22:28.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325251 s, 12.6 MB/s 00:22:28.974 22:30:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.974 22:30:27 -- common/autotest_common.sh@874 -- # size=4096 00:22:28.974 22:30:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.974 22:30:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:28.974 22:30:27 -- common/autotest_common.sh@877 -- # return 0 00:22:28.974 22:30:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:28.974 22:30:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:28.974 22:30:27 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:29.232 /dev/nbd1 00:22:29.232 22:30:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:29.232 22:30:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:29.232 22:30:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:29.232 22:30:28 -- common/autotest_common.sh@857 -- # local i 00:22:29.232 22:30:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:29.232 22:30:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:29.232 22:30:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:29.232 22:30:28 -- common/autotest_common.sh@861 -- # break 00:22:29.232 22:30:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:29.232 22:30:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:29.232 22:30:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.232 1+0 records in 00:22:29.232 1+0 records out 00:22:29.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411642 s, 10.0 MB/s 00:22:29.232 22:30:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.232 22:30:28 -- common/autotest_common.sh@874 -- # size=4096 00:22:29.232 22:30:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.232 22:30:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:29.232 22:30:28 -- common/autotest_common.sh@877 -- # return 0 00:22:29.232 22:30:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.232 22:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.232 22:30:28 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:29.490 22:30:28 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@51 -- # local i 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:29.490 22:30:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@41 -- # break 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@45 -- # return 0 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.749 22:30:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
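The cmp just traced is the heart of the rebuild check: after the array is deleted, the replaced base bdev and the spare rebuilt in its place are exported as NBD devices and compared byte-for-byte. A condensed sketch of that sequence under the same socket and device names (the waitfornbd polling and error handling seen in the trace are omitted):

    # Sketch of the NBD-based rebuild verification; polling/cleanup omitted.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1

    # -i 0: this variant has no superblock, so data starts at byte 0 on both.
    cmp -i 0 /dev/nbd0 /dev/nbd1

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1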
00:22:30.007 22:30:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@41 -- # break 00:22:30.007 22:30:28 -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.007 22:30:28 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:30.007 22:30:28 -- bdev/bdev_raid.sh@709 -- # killprocess 127952 00:22:30.007 22:30:28 -- common/autotest_common.sh@926 -- # '[' -z 127952 ']' 00:22:30.007 22:30:28 -- common/autotest_common.sh@930 -- # kill -0 127952 00:22:30.007 22:30:28 -- common/autotest_common.sh@931 -- # uname 00:22:30.007 22:30:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:30.007 22:30:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127952 00:22:30.007 22:30:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:30.007 22:30:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:30.007 22:30:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127952' 00:22:30.007 killing process with pid 127952 00:22:30.007 22:30:28 -- common/autotest_common.sh@945 -- # kill 127952 00:22:30.007 Received shutdown signal, test time was about 60.000000 seconds 00:22:30.007 00:22:30.007 Latency(us) 00:22:30.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.007 =================================================================================================================== 00:22:30.007 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.007 22:30:28 -- common/autotest_common.sh@950 -- # wait 127952 00:22:30.007 [2024-07-14 22:30:28.992256] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:30.265 [2024-07-14 22:30:29.328282] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:31.200 22:30:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:31.200 00:22:31.200 real 0m23.179s 00:22:31.200 user 0m31.966s 00:22:31.200 sys 0m3.962s 00:22:31.200 22:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.200 22:30:30 -- common/autotest_common.sh@10 -- # set +x 00:22:31.459 ************************************ 00:22:31.459 END TEST raid_rebuild_test 00:22:31.459 ************************************ 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:22:31.459 22:30:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:31.459 22:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:31.459 22:30:30 -- common/autotest_common.sh@10 -- # set +x 00:22:31.459 ************************************ 00:22:31.459 START TEST raid_rebuild_test_sb 00:22:31.459 ************************************ 00:22:31.459 22:30:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:31.459 22:30:30 -- 
bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=128546 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128546 /var/tmp/spdk-raid.sock 00:22:31.459 22:30:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:31.459 22:30:30 -- common/autotest_common.sh@819 -- # '[' -z 128546 ']' 00:22:31.459 22:30:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:31.459 22:30:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:31.459 22:30:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:31.459 22:30:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:31.459 22:30:30 -- common/autotest_common.sh@10 -- # set +x 00:22:31.459 [2024-07-14 22:30:30.405587] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:31.459 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:31.459 Zero copy mechanism will not be used. 
00:22:31.459 [2024-07-14 22:30:30.405800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128546 ] 00:22:31.718 [2024-07-14 22:30:30.573892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.718 [2024-07-14 22:30:30.743614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.977 [2024-07-14 22:30:30.907102] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.545 22:30:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.545 22:30:31 -- common/autotest_common.sh@852 -- # return 0 00:22:32.545 22:30:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:32.545 22:30:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:32.545 22:30:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:32.545 BaseBdev1_malloc 00:22:32.545 22:30:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:32.804 [2024-07-14 22:30:31.769389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:32.804 [2024-07-14 22:30:31.769480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.804 [2024-07-14 22:30:31.769515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:32.804 [2024-07-14 22:30:31.769596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.804 [2024-07-14 22:30:31.771578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.804 [2024-07-14 22:30:31.771631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:32.804 BaseBdev1 00:22:32.804 22:30:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:32.804 22:30:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:32.804 22:30:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:33.064 BaseBdev2_malloc 00:22:33.064 22:30:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:33.324 [2024-07-14 22:30:32.271657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:33.324 [2024-07-14 22:30:32.271732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.324 [2024-07-14 22:30:32.271791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:33.324 [2024-07-14 22:30:32.271856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.324 [2024-07-14 22:30:32.273979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.324 [2024-07-14 22:30:32.274030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:33.324 BaseBdev2 00:22:33.324 22:30:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:33.324 22:30:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:33.324 22:30:32 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:33.583 BaseBdev3_malloc 00:22:33.583 22:30:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:33.841 [2024-07-14 22:30:32.708702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:33.841 [2024-07-14 22:30:32.708775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.841 [2024-07-14 22:30:32.708817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:33.841 [2024-07-14 22:30:32.708862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.841 [2024-07-14 22:30:32.710977] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.841 [2024-07-14 22:30:32.711035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:33.841 BaseBdev3 00:22:33.841 22:30:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:33.841 22:30:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:33.841 22:30:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:34.100 BaseBdev4_malloc 00:22:34.100 22:30:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:34.357 [2024-07-14 22:30:33.194094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:34.357 [2024-07-14 22:30:33.194245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.358 [2024-07-14 22:30:33.194307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:34.358 [2024-07-14 22:30:33.194379] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.358 [2024-07-14 22:30:33.197559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.358 [2024-07-14 22:30:33.197635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:34.358 BaseBdev4 00:22:34.358 22:30:33 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:34.615 spare_malloc 00:22:34.615 22:30:33 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:34.615 spare_delay 00:22:34.615 22:30:33 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:34.872 [2024-07-14 22:30:33.815347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:34.872 [2024-07-14 22:30:33.815453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.872 [2024-07-14 22:30:33.815489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:34.872 [2024-07-14 22:30:33.815563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.872 [2024-07-14 22:30:33.818173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:34.872 [2024-07-14 22:30:33.818238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:34.872 spare 00:22:34.872 22:30:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:35.130 [2024-07-14 22:30:34.015572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:35.130 [2024-07-14 22:30:34.017557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:35.130 [2024-07-14 22:30:34.017648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:35.130 [2024-07-14 22:30:34.017707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:35.130 [2024-07-14 22:30:34.017965] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:35.130 [2024-07-14 22:30:34.017987] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:35.130 [2024-07-14 22:30:34.018141] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:35.130 [2024-07-14 22:30:34.018535] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:35.130 [2024-07-14 22:30:34.018558] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:35.130 [2024-07-14 22:30:34.018737] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.130 22:30:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.388 22:30:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.388 "name": "raid_bdev1", 00:22:35.388 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:35.388 "strip_size_kb": 0, 00:22:35.388 "state": "online", 00:22:35.388 "raid_level": "raid1", 00:22:35.388 "superblock": true, 00:22:35.388 "num_base_bdevs": 4, 00:22:35.388 "num_base_bdevs_discovered": 4, 00:22:35.388 "num_base_bdevs_operational": 4, 00:22:35.388 "base_bdevs_list": [ 00:22:35.388 { 00:22:35.388 "name": "BaseBdev1", 00:22:35.388 "uuid": "ac17e730-63c5-5422-b4be-e45236f6b0c7", 00:22:35.388 "is_configured": true, 00:22:35.388 "data_offset": 2048, 00:22:35.388 "data_size": 63488 00:22:35.388 }, 00:22:35.388 { 00:22:35.388 "name": "BaseBdev2", 00:22:35.388 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:35.388 "is_configured": true, 00:22:35.388 "data_offset": 2048, 
00:22:35.388 "data_size": 63488 00:22:35.388 }, 00:22:35.388 { 00:22:35.388 "name": "BaseBdev3", 00:22:35.388 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:35.388 "is_configured": true, 00:22:35.388 "data_offset": 2048, 00:22:35.388 "data_size": 63488 00:22:35.388 }, 00:22:35.388 { 00:22:35.388 "name": "BaseBdev4", 00:22:35.388 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:35.388 "is_configured": true, 00:22:35.388 "data_offset": 2048, 00:22:35.388 "data_size": 63488 00:22:35.388 } 00:22:35.388 ] 00:22:35.388 }' 00:22:35.388 22:30:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.388 22:30:34 -- common/autotest_common.sh@10 -- # set +x 00:22:35.953 22:30:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:35.953 22:30:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:36.210 [2024-07-14 22:30:35.172051] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.210 22:30:35 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:36.210 22:30:35 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.210 22:30:35 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:36.468 22:30:35 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:36.468 22:30:35 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:36.468 22:30:35 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:36.468 22:30:35 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.468 22:30:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:36.726 [2024-07-14 22:30:35.591948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:36.726 /dev/nbd0 00:22:36.726 22:30:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:36.726 22:30:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:36.726 22:30:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:36.726 22:30:35 -- common/autotest_common.sh@857 -- # local i 00:22:36.726 22:30:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.726 22:30:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.726 22:30:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:36.726 22:30:35 -- common/autotest_common.sh@861 -- # break 00:22:36.726 22:30:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.726 22:30:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.726 22:30:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.726 1+0 records in 00:22:36.726 1+0 records out 00:22:36.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311811 s, 13.1 MB/s 00:22:36.726 
22:30:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.726 22:30:35 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.726 22:30:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.726 22:30:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.726 22:30:35 -- common/autotest_common.sh@877 -- # return 0 00:22:36.726 22:30:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.726 22:30:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.726 22:30:35 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:36.726 22:30:35 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:36.726 22:30:35 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:43.333 63488+0 records in 00:22:43.333 63488+0 records out 00:22:43.333 32505856 bytes (33 MB, 31 MiB) copied, 6.19785 s, 5.2 MB/s 00:22:43.333 22:30:41 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@51 -- # local i 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:43.333 22:30:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:43.333 [2024-07-14 22:30:42.064121] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.333 22:30:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:43.334 22:30:42 -- bdev/nbd_common.sh@41 -- # break 00:22:43.334 22:30:42 -- bdev/nbd_common.sh@45 -- # return 0 00:22:43.334 22:30:42 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:43.334 [2024-07-14 22:30:42.395888] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.592 22:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.850 22:30:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.850 "name": "raid_bdev1", 00:22:43.850 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:43.850 "strip_size_kb": 0, 00:22:43.850 "state": "online", 00:22:43.850 "raid_level": "raid1", 00:22:43.850 "superblock": true, 00:22:43.850 "num_base_bdevs": 4, 00:22:43.850 "num_base_bdevs_discovered": 3, 00:22:43.850 "num_base_bdevs_operational": 3, 00:22:43.850 "base_bdevs_list": [ 00:22:43.850 { 00:22:43.850 "name": null, 00:22:43.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.850 "is_configured": false, 00:22:43.850 "data_offset": 2048, 00:22:43.850 "data_size": 63488 00:22:43.850 }, 00:22:43.850 { 00:22:43.850 "name": "BaseBdev2", 00:22:43.850 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:43.850 "is_configured": true, 00:22:43.850 "data_offset": 2048, 00:22:43.850 "data_size": 63488 00:22:43.850 }, 00:22:43.850 { 00:22:43.850 "name": "BaseBdev3", 00:22:43.850 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:43.850 "is_configured": true, 00:22:43.850 "data_offset": 2048, 00:22:43.850 "data_size": 63488 00:22:43.850 }, 00:22:43.850 { 00:22:43.850 "name": "BaseBdev4", 00:22:43.850 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:43.850 "is_configured": true, 00:22:43.850 "data_offset": 2048, 00:22:43.850 "data_size": 63488 00:22:43.850 } 00:22:43.850 ] 00:22:43.850 }' 00:22:43.850 22:30:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.850 22:30:42 -- common/autotest_common.sh@10 -- # set +x 00:22:44.416 22:30:43 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:44.675 [2024-07-14 22:30:43.560062] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:44.675 [2024-07-14 22:30:43.560146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.675 [2024-07-14 22:30:43.570406] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4610 00:22:44.675 [2024-07-14 22:30:43.572224] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:44.675 22:30:43 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.610 22:30:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.868 22:30:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.868 "name": "raid_bdev1", 00:22:45.868 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:45.869 "strip_size_kb": 0, 00:22:45.869 "state": "online", 00:22:45.869 "raid_level": "raid1", 00:22:45.869 "superblock": true, 00:22:45.869 "num_base_bdevs": 4, 00:22:45.869 "num_base_bdevs_discovered": 4, 
00:22:45.869 "num_base_bdevs_operational": 4, 00:22:45.869 "process": { 00:22:45.869 "type": "rebuild", 00:22:45.869 "target": "spare", 00:22:45.869 "progress": { 00:22:45.869 "blocks": 24576, 00:22:45.869 "percent": 38 00:22:45.869 } 00:22:45.869 }, 00:22:45.869 "base_bdevs_list": [ 00:22:45.869 { 00:22:45.869 "name": "spare", 00:22:45.869 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:45.869 "is_configured": true, 00:22:45.869 "data_offset": 2048, 00:22:45.869 "data_size": 63488 00:22:45.869 }, 00:22:45.869 { 00:22:45.869 "name": "BaseBdev2", 00:22:45.869 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:45.869 "is_configured": true, 00:22:45.869 "data_offset": 2048, 00:22:45.869 "data_size": 63488 00:22:45.869 }, 00:22:45.869 { 00:22:45.869 "name": "BaseBdev3", 00:22:45.869 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:45.869 "is_configured": true, 00:22:45.869 "data_offset": 2048, 00:22:45.869 "data_size": 63488 00:22:45.869 }, 00:22:45.869 { 00:22:45.869 "name": "BaseBdev4", 00:22:45.869 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:45.869 "is_configured": true, 00:22:45.869 "data_offset": 2048, 00:22:45.869 "data_size": 63488 00:22:45.869 } 00:22:45.869 ] 00:22:45.869 }' 00:22:45.869 22:30:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.869 22:30:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.869 22:30:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.139 22:30:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.139 22:30:44 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:46.139 [2024-07-14 22:30:45.167021] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.139 [2024-07-14 22:30:45.180861] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:46.139 [2024-07-14 22:30:45.180946] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.406 "name": "raid_bdev1", 00:22:46.406 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:46.406 "strip_size_kb": 0, 00:22:46.406 "state": "online", 00:22:46.406 "raid_level": "raid1", 00:22:46.406 "superblock": true, 00:22:46.406 "num_base_bdevs": 4, 00:22:46.406 "num_base_bdevs_discovered": 3, 00:22:46.406 "num_base_bdevs_operational": 3, 
00:22:46.406 "base_bdevs_list": [ 00:22:46.406 { 00:22:46.406 "name": null, 00:22:46.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.406 "is_configured": false, 00:22:46.406 "data_offset": 2048, 00:22:46.406 "data_size": 63488 00:22:46.406 }, 00:22:46.406 { 00:22:46.406 "name": "BaseBdev2", 00:22:46.406 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:46.406 "is_configured": true, 00:22:46.406 "data_offset": 2048, 00:22:46.406 "data_size": 63488 00:22:46.406 }, 00:22:46.406 { 00:22:46.406 "name": "BaseBdev3", 00:22:46.406 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:46.406 "is_configured": true, 00:22:46.406 "data_offset": 2048, 00:22:46.406 "data_size": 63488 00:22:46.406 }, 00:22:46.406 { 00:22:46.406 "name": "BaseBdev4", 00:22:46.406 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:46.406 "is_configured": true, 00:22:46.406 "data_offset": 2048, 00:22:46.406 "data_size": 63488 00:22:46.406 } 00:22:46.406 ] 00:22:46.406 }' 00:22:46.406 22:30:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.406 22:30:45 -- common/autotest_common.sh@10 -- # set +x 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.338 "name": "raid_bdev1", 00:22:47.338 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:47.338 "strip_size_kb": 0, 00:22:47.338 "state": "online", 00:22:47.338 "raid_level": "raid1", 00:22:47.338 "superblock": true, 00:22:47.338 "num_base_bdevs": 4, 00:22:47.338 "num_base_bdevs_discovered": 3, 00:22:47.338 "num_base_bdevs_operational": 3, 00:22:47.338 "base_bdevs_list": [ 00:22:47.338 { 00:22:47.338 "name": null, 00:22:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.338 "is_configured": false, 00:22:47.338 "data_offset": 2048, 00:22:47.338 "data_size": 63488 00:22:47.338 }, 00:22:47.338 { 00:22:47.338 "name": "BaseBdev2", 00:22:47.338 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:47.338 "is_configured": true, 00:22:47.338 "data_offset": 2048, 00:22:47.338 "data_size": 63488 00:22:47.338 }, 00:22:47.338 { 00:22:47.338 "name": "BaseBdev3", 00:22:47.338 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:47.338 "is_configured": true, 00:22:47.338 "data_offset": 2048, 00:22:47.338 "data_size": 63488 00:22:47.338 }, 00:22:47.338 { 00:22:47.338 "name": "BaseBdev4", 00:22:47.338 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:47.338 "is_configured": true, 00:22:47.338 "data_offset": 2048, 00:22:47.338 "data_size": 63488 00:22:47.338 } 00:22:47.338 ] 00:22:47.338 }' 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:47.338 22:30:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.596 22:30:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:47.596 22:30:46 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:47.596 [2024-07-14 22:30:46.620642] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:47.596 [2024-07-14 22:30:46.620730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.596 [2024-07-14 22:30:46.631114] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca47b0 00:22:47.596 [2024-07-14 22:30:46.633243] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:47.596 22:30:46 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.972 "name": "raid_bdev1", 00:22:48.972 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:48.972 "strip_size_kb": 0, 00:22:48.972 "state": "online", 00:22:48.972 "raid_level": "raid1", 00:22:48.972 "superblock": true, 00:22:48.972 "num_base_bdevs": 4, 00:22:48.972 "num_base_bdevs_discovered": 4, 00:22:48.972 "num_base_bdevs_operational": 4, 00:22:48.972 "process": { 00:22:48.972 "type": "rebuild", 00:22:48.972 "target": "spare", 00:22:48.972 "progress": { 00:22:48.972 "blocks": 22528, 00:22:48.972 "percent": 35 00:22:48.972 } 00:22:48.972 }, 00:22:48.972 "base_bdevs_list": [ 00:22:48.972 { 00:22:48.972 "name": "spare", 00:22:48.972 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:48.972 "is_configured": true, 00:22:48.972 "data_offset": 2048, 00:22:48.972 "data_size": 63488 00:22:48.972 }, 00:22:48.972 { 00:22:48.972 "name": "BaseBdev2", 00:22:48.972 "uuid": "a80caea5-00a2-5d3f-a8f7-b0bf1a383844", 00:22:48.972 "is_configured": true, 00:22:48.972 "data_offset": 2048, 00:22:48.972 "data_size": 63488 00:22:48.972 }, 00:22:48.972 { 00:22:48.972 "name": "BaseBdev3", 00:22:48.972 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:48.972 "is_configured": true, 00:22:48.972 "data_offset": 2048, 00:22:48.972 "data_size": 63488 00:22:48.972 }, 00:22:48.972 { 00:22:48.972 "name": "BaseBdev4", 00:22:48.972 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:48.972 "is_configured": true, 00:22:48.972 "data_offset": 2048, 00:22:48.972 "data_size": 63488 00:22:48.972 } 00:22:48.972 ] 00:22:48.972 }' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:48.972 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:48.972 22:30:47 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:48.972 22:30:47 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:49.231 [2024-07-14 22:30:48.183267] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:49.231 [2024-07-14 22:30:48.241990] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca47b0 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.490 22:30:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.748 22:30:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.748 "name": "raid_bdev1", 00:22:49.748 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:49.748 "strip_size_kb": 0, 00:22:49.748 "state": "online", 00:22:49.748 "raid_level": "raid1", 00:22:49.748 "superblock": true, 00:22:49.748 "num_base_bdevs": 4, 00:22:49.748 "num_base_bdevs_discovered": 3, 00:22:49.748 "num_base_bdevs_operational": 3, 00:22:49.748 "process": { 00:22:49.748 "type": "rebuild", 00:22:49.748 "target": "spare", 00:22:49.748 "progress": { 00:22:49.748 "blocks": 38912, 00:22:49.748 "percent": 61 00:22:49.748 } 00:22:49.748 }, 00:22:49.748 "base_bdevs_list": [ 00:22:49.748 { 00:22:49.748 "name": "spare", 00:22:49.748 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:49.748 "is_configured": true, 00:22:49.748 "data_offset": 2048, 00:22:49.748 "data_size": 63488 00:22:49.748 }, 00:22:49.749 { 00:22:49.749 "name": null, 00:22:49.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.749 "is_configured": false, 00:22:49.749 "data_offset": 2048, 00:22:49.749 "data_size": 63488 00:22:49.749 }, 00:22:49.749 { 00:22:49.749 "name": "BaseBdev3", 00:22:49.749 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:49.749 "is_configured": true, 00:22:49.749 "data_offset": 2048, 00:22:49.749 "data_size": 63488 00:22:49.749 }, 00:22:49.749 { 00:22:49.749 "name": "BaseBdev4", 00:22:49.749 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:49.749 "is_configured": true, 00:22:49.749 "data_offset": 2048, 00:22:49.749 "data_size": 63488 00:22:49.749 } 00:22:49.749 ] 00:22:49.749 }' 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.749 22:30:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.008 22:30:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:50.008 "name": "raid_bdev1", 00:22:50.008 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:50.008 "strip_size_kb": 0, 00:22:50.008 "state": "online", 00:22:50.008 "raid_level": "raid1", 00:22:50.008 "superblock": true, 00:22:50.008 "num_base_bdevs": 4, 00:22:50.008 "num_base_bdevs_discovered": 3, 00:22:50.008 "num_base_bdevs_operational": 3, 00:22:50.008 "process": { 00:22:50.008 "type": "rebuild", 00:22:50.008 "target": "spare", 00:22:50.008 "progress": { 00:22:50.008 "blocks": 45056, 00:22:50.008 "percent": 70 00:22:50.008 } 00:22:50.008 }, 00:22:50.008 "base_bdevs_list": [ 00:22:50.008 { 00:22:50.008 "name": "spare", 00:22:50.008 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:50.008 "is_configured": true, 00:22:50.008 "data_offset": 2048, 00:22:50.008 "data_size": 63488 00:22:50.008 }, 00:22:50.008 { 00:22:50.008 "name": null, 00:22:50.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.008 "is_configured": false, 00:22:50.008 "data_offset": 2048, 00:22:50.008 "data_size": 63488 00:22:50.008 }, 00:22:50.008 { 00:22:50.008 "name": "BaseBdev3", 00:22:50.008 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:50.008 "is_configured": true, 00:22:50.008 "data_offset": 2048, 00:22:50.008 "data_size": 63488 00:22:50.008 }, 00:22:50.008 { 00:22:50.008 "name": "BaseBdev4", 00:22:50.008 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:50.008 "is_configured": true, 00:22:50.008 "data_offset": 2048, 00:22:50.008 "data_size": 63488 00:22:50.008 } 00:22:50.008 ] 00:22:50.008 }' 00:22:50.008 22:30:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:50.008 22:30:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.008 22:30:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:50.008 22:30:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.008 22:30:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:50.943 [2024-07-14 22:30:49.749739] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:50.943 [2024-07-14 22:30:49.749826] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:50.943 [2024-07-14 22:30:49.749984] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.202 22:30:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.461 "name": "raid_bdev1", 00:22:51.461 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:51.461 "strip_size_kb": 0, 00:22:51.461 "state": "online", 00:22:51.461 "raid_level": "raid1", 00:22:51.461 "superblock": true, 00:22:51.461 "num_base_bdevs": 4, 00:22:51.461 "num_base_bdevs_discovered": 3, 00:22:51.461 "num_base_bdevs_operational": 3, 00:22:51.461 "base_bdevs_list": [ 00:22:51.461 { 00:22:51.461 "name": "spare", 00:22:51.461 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 2048, 00:22:51.461 "data_size": 63488 00:22:51.461 }, 00:22:51.461 { 00:22:51.461 "name": null, 00:22:51.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.461 "is_configured": false, 00:22:51.461 "data_offset": 2048, 00:22:51.461 "data_size": 63488 00:22:51.461 }, 00:22:51.461 { 00:22:51.461 "name": "BaseBdev3", 00:22:51.461 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 2048, 00:22:51.461 "data_size": 63488 00:22:51.461 }, 00:22:51.461 { 00:22:51.461 "name": "BaseBdev4", 00:22:51.461 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 2048, 00:22:51.461 "data_size": 63488 00:22:51.461 } 00:22:51.461 ] 00:22:51.461 }' 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@660 -- # break 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.461 22:30:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.721 "name": "raid_bdev1", 00:22:51.721 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:51.721 "strip_size_kb": 0, 00:22:51.721 "state": "online", 00:22:51.721 "raid_level": "raid1", 00:22:51.721 "superblock": true, 00:22:51.721 "num_base_bdevs": 4, 00:22:51.721 "num_base_bdevs_discovered": 3, 00:22:51.721 "num_base_bdevs_operational": 3, 00:22:51.721 "base_bdevs_list": [ 00:22:51.721 { 00:22:51.721 "name": "spare", 00:22:51.721 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:51.721 "is_configured": true, 00:22:51.721 "data_offset": 2048, 00:22:51.721 "data_size": 63488 00:22:51.721 }, 00:22:51.721 { 00:22:51.721 "name": null, 00:22:51.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.721 "is_configured": false, 00:22:51.721 "data_offset": 2048, 00:22:51.721 "data_size": 63488 00:22:51.721 }, 00:22:51.721 { 00:22:51.721 "name": "BaseBdev3", 00:22:51.721 "uuid": 
"5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:51.721 "is_configured": true, 00:22:51.721 "data_offset": 2048, 00:22:51.721 "data_size": 63488 00:22:51.721 }, 00:22:51.721 { 00:22:51.721 "name": "BaseBdev4", 00:22:51.721 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:51.721 "is_configured": true, 00:22:51.721 "data_offset": 2048, 00:22:51.721 "data_size": 63488 00:22:51.721 } 00:22:51.721 ] 00:22:51.721 }' 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.721 22:30:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.979 22:30:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.979 "name": "raid_bdev1", 00:22:51.979 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:51.979 "strip_size_kb": 0, 00:22:51.979 "state": "online", 00:22:51.979 "raid_level": "raid1", 00:22:51.979 "superblock": true, 00:22:51.979 "num_base_bdevs": 4, 00:22:51.979 "num_base_bdevs_discovered": 3, 00:22:51.979 "num_base_bdevs_operational": 3, 00:22:51.979 "base_bdevs_list": [ 00:22:51.979 { 00:22:51.979 "name": "spare", 00:22:51.979 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:51.980 "is_configured": true, 00:22:51.980 "data_offset": 2048, 00:22:51.980 "data_size": 63488 00:22:51.980 }, 00:22:51.980 { 00:22:51.980 "name": null, 00:22:51.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.980 "is_configured": false, 00:22:51.980 "data_offset": 2048, 00:22:51.980 "data_size": 63488 00:22:51.980 }, 00:22:51.980 { 00:22:51.980 "name": "BaseBdev3", 00:22:51.980 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:51.980 "is_configured": true, 00:22:51.980 "data_offset": 2048, 00:22:51.980 "data_size": 63488 00:22:51.980 }, 00:22:51.980 { 00:22:51.980 "name": "BaseBdev4", 00:22:51.980 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:51.980 "is_configured": true, 00:22:51.980 "data_offset": 2048, 00:22:51.980 "data_size": 63488 00:22:51.980 } 00:22:51.980 ] 00:22:51.980 }' 00:22:51.980 22:30:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.980 22:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:52.914 22:30:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:52.914 [2024-07-14 22:30:51.917562] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:22:52.914 [2024-07-14 22:30:51.917593] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.914 [2024-07-14 22:30:51.917676] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.915 [2024-07-14 22:30:51.917760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:52.915 [2024-07-14 22:30:51.917774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:52.915 22:30:51 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.915 22:30:51 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:53.182 22:30:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:53.182 22:30:52 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:53.182 22:30:52 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@12 -- # local i 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.182 22:30:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:53.477 /dev/nbd0 00:22:53.477 22:30:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:53.477 22:30:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:53.477 22:30:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:53.477 22:30:52 -- common/autotest_common.sh@857 -- # local i 00:22:53.477 22:30:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:53.477 22:30:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:53.477 22:30:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:53.477 22:30:52 -- common/autotest_common.sh@861 -- # break 00:22:53.477 22:30:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:53.477 22:30:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:53.477 22:30:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.477 1+0 records in 00:22:53.477 1+0 records out 00:22:53.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207888 s, 19.7 MB/s 00:22:53.477 22:30:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.477 22:30:52 -- common/autotest_common.sh@874 -- # size=4096 00:22:53.477 22:30:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.477 22:30:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:53.477 22:30:52 -- common/autotest_common.sh@877 -- # return 0 00:22:53.477 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.477 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.477 22:30:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:53.736 
/dev/nbd1 00:22:53.736 22:30:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:53.736 22:30:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:53.736 22:30:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:53.736 22:30:52 -- common/autotest_common.sh@857 -- # local i 00:22:53.736 22:30:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:53.736 22:30:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:53.736 22:30:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:53.736 22:30:52 -- common/autotest_common.sh@861 -- # break 00:22:53.736 22:30:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:53.736 22:30:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:53.736 22:30:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.736 1+0 records in 00:22:53.736 1+0 records out 00:22:53.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306962 s, 13.3 MB/s 00:22:53.736 22:30:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.736 22:30:52 -- common/autotest_common.sh@874 -- # size=4096 00:22:53.736 22:30:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.736 22:30:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:53.736 22:30:52 -- common/autotest_common.sh@877 -- # return 0 00:22:53.736 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.736 22:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.736 22:30:52 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:53.994 22:30:52 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@51 -- # local i 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:53.994 22:30:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@41 -- # break 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.253 22:30:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:54.510 22:30:53 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@41 -- # break 00:22:54.510 22:30:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.510 22:30:53 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:54.510 22:30:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:54.511 22:30:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:54.511 22:30:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:54.768 22:30:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.025 [2024-07-14 22:30:53.998217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.025 [2024-07-14 22:30:53.998312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.025 [2024-07-14 22:30:53.998358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:55.025 [2024-07-14 22:30:53.998392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.025 [2024-07-14 22:30:54.000817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.025 [2024-07-14 22:30:54.000906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.025 [2024-07-14 22:30:54.001052] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.025 [2024-07-14 22:30:54.001133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.025 BaseBdev1 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@696 -- # continue 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:55.025 22:30:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:55.283 22:30:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:55.540 [2024-07-14 22:30:54.394286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:55.540 [2024-07-14 22:30:54.394362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.540 [2024-07-14 22:30:54.394402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:55.540 [2024-07-14 22:30:54.394424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.540 [2024-07-14 22:30:54.394894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
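The passthru delete/re-create cycle traced above is what drives re-examination: dropping a passthru bdev removes that base from raid_bdev1, and recreating it over the same malloc backing lets examine() find the raid superblock and claim the bdev back. A minimal standalone sketch of that cycle, assuming the socket and repo paths shown in this log; the recreate_base_bdev helper and the loop are illustrative, not part of the test scripts:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

recreate_base_bdev() {
    local bdev=$1
    # Tear down the passthru vbdev; raid_bdev1 sees the base bdev go away
    $RPC bdev_passthru_delete "$bdev"
    # Recreate it over the same malloc base; on examine the raid superblock
    # is found and the bdev is claimed back into raid_bdev1
    $RPC bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
}

# BaseBdev2 is skipped in the trace above (its slot is empty after removal)
for b in BaseBdev1 BaseBdev3 BaseBdev4; do
    recreate_base_bdev "$b"
done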
00:22:55.540 [2024-07-14 22:30:54.395013] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:55.540 [2024-07-14 22:30:54.395103] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:55.540 [2024-07-14 22:30:54.395119] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:55.540 [2024-07-14 22:30:54.395126] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.540 [2024-07-14 22:30:54.395160] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:55.540 [2024-07-14 22:30:54.395227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:55.540 BaseBdev3 00:22:55.540 22:30:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.540 22:30:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:55.540 22:30:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:55.798 22:30:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:56.057 [2024-07-14 22:30:54.906379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:56.057 [2024-07-14 22:30:54.906446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.057 [2024-07-14 22:30:54.906482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:56.057 [2024-07-14 22:30:54.906511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.057 [2024-07-14 22:30:54.906943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.057 [2024-07-14 22:30:54.907010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:56.057 [2024-07-14 22:30:54.907129] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:56.057 [2024-07-14 22:30:54.907156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:56.057 BaseBdev4 00:22:56.057 22:30:54 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:56.315 [2024-07-14 22:30:55.362455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.315 [2024-07-14 22:30:55.362520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.315 [2024-07-14 22:30:55.362552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:56.315 [2024-07-14 22:30:55.362582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.315 [2024-07-14 22:30:55.363057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.315 [2024-07-14 22:30:55.363137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.315 [2024-07-14 22:30:55.363265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found 
on bdev spare 00:22:56.315 [2024-07-14 22:30:55.363305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.315 spare 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.315 22:30:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.574 [2024-07-14 22:30:55.463417] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:56.574 [2024-07-14 22:30:55.463437] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:56.574 [2024-07-14 22:30:55.463575] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc53c0 00:22:56.574 [2024-07-14 22:30:55.464024] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:56.574 [2024-07-14 22:30:55.464047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:56.574 [2024-07-14 22:30:55.464244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.574 22:30:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.574 "name": "raid_bdev1", 00:22:56.574 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:56.574 "strip_size_kb": 0, 00:22:56.574 "state": "online", 00:22:56.574 "raid_level": "raid1", 00:22:56.574 "superblock": true, 00:22:56.574 "num_base_bdevs": 4, 00:22:56.574 "num_base_bdevs_discovered": 3, 00:22:56.574 "num_base_bdevs_operational": 3, 00:22:56.574 "base_bdevs_list": [ 00:22:56.574 { 00:22:56.574 "name": "spare", 00:22:56.574 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:56.574 "is_configured": true, 00:22:56.574 "data_offset": 2048, 00:22:56.574 "data_size": 63488 00:22:56.574 }, 00:22:56.574 { 00:22:56.574 "name": null, 00:22:56.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.574 "is_configured": false, 00:22:56.574 "data_offset": 2048, 00:22:56.574 "data_size": 63488 00:22:56.574 }, 00:22:56.574 { 00:22:56.574 "name": "BaseBdev3", 00:22:56.574 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:56.574 "is_configured": true, 00:22:56.574 "data_offset": 2048, 00:22:56.574 "data_size": 63488 00:22:56.574 }, 00:22:56.574 { 00:22:56.574 "name": "BaseBdev4", 00:22:56.574 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:56.574 "is_configured": true, 00:22:56.574 "data_offset": 2048, 00:22:56.574 "data_size": 63488 00:22:56.574 } 00:22:56.574 ] 00:22:56.574 }' 00:22:56.574 22:30:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.574 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@705 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.509 "name": "raid_bdev1", 00:22:57.509 "uuid": "04e8d39b-70d4-438d-aaad-d1fc65a78b8d", 00:22:57.509 "strip_size_kb": 0, 00:22:57.509 "state": "online", 00:22:57.509 "raid_level": "raid1", 00:22:57.509 "superblock": true, 00:22:57.509 "num_base_bdevs": 4, 00:22:57.509 "num_base_bdevs_discovered": 3, 00:22:57.509 "num_base_bdevs_operational": 3, 00:22:57.509 "base_bdevs_list": [ 00:22:57.509 { 00:22:57.509 "name": "spare", 00:22:57.509 "uuid": "aa39a782-84fc-5d6b-9ec3-e414a0dab773", 00:22:57.509 "is_configured": true, 00:22:57.509 "data_offset": 2048, 00:22:57.509 "data_size": 63488 00:22:57.509 }, 00:22:57.509 { 00:22:57.509 "name": null, 00:22:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.509 "is_configured": false, 00:22:57.509 "data_offset": 2048, 00:22:57.509 "data_size": 63488 00:22:57.509 }, 00:22:57.509 { 00:22:57.509 "name": "BaseBdev3", 00:22:57.509 "uuid": "5cf86ec9-42ab-5a23-967a-ff32d3f97eda", 00:22:57.509 "is_configured": true, 00:22:57.509 "data_offset": 2048, 00:22:57.509 "data_size": 63488 00:22:57.509 }, 00:22:57.509 { 00:22:57.509 "name": "BaseBdev4", 00:22:57.509 "uuid": "a724cda7-8cf5-542b-a9cc-9bb0afbcb509", 00:22:57.509 "is_configured": true, 00:22:57.509 "data_offset": 2048, 00:22:57.509 "data_size": 63488 00:22:57.509 } 00:22:57.509 ] 00:22:57.509 }' 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:57.509 22:30:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.768 22:30:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:57.768 22:30:56 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:57.768 22:30:56 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.768 22:30:56 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.768 22:30:56 -- bdev/bdev_raid.sh@709 -- # killprocess 128546 00:22:57.768 22:30:56 -- common/autotest_common.sh@926 -- # '[' -z 128546 ']' 00:22:57.768 22:30:56 -- common/autotest_common.sh@930 -- # kill -0 128546 00:22:57.768 22:30:56 -- common/autotest_common.sh@931 -- # uname 00:22:57.768 22:30:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:57.768 22:30:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128546 00:22:57.768 22:30:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:57.768 22:30:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:57.768 22:30:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128546' 00:22:57.768 killing process with pid 128546 00:22:57.768 22:30:56 -- common/autotest_common.sh@945 -- # kill 128546 00:22:57.768 Received shutdown signal, test time was about 60.000000 seconds 00:22:57.768 
00:22:57.768 Latency(us) 00:22:57.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.768 =================================================================================================================== 00:22:57.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.768 22:30:56 -- common/autotest_common.sh@950 -- # wait 128546 00:22:57.768 [2024-07-14 22:30:56.811615] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.768 [2024-07-14 22:30:56.811677] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.768 [2024-07-14 22:30:56.811841] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.768 [2024-07-14 22:30:56.811864] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:58.335 [2024-07-14 22:30:57.124801] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:59.271 00:22:59.271 real 0m27.714s 00:22:59.271 user 0m40.120s 00:22:59.271 sys 0m4.055s 00:22:59.271 22:30:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.271 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:22:59.271 ************************************ 00:22:59.271 END TEST raid_rebuild_test_sb 00:22:59.271 ************************************ 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:59.271 22:30:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:59.271 22:30:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:59.271 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:22:59.271 ************************************ 00:22:59.271 START TEST raid_rebuild_test_io 00:22:59.271 ************************************ 00:22:59.271 22:30:58 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:59.271 22:30:58 -- 
bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@544 -- # raid_pid=129269 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129269 /var/tmp/spdk-raid.sock 00:22:59.271 22:30:58 -- common/autotest_common.sh@819 -- # '[' -z 129269 ']' 00:22:59.271 22:30:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:59.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:59.271 22:30:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:59.271 22:30:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:59.271 22:30:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:59.271 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:22:59.271 22:30:58 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:59.271 [2024-07-14 22:30:58.190403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:59.271 [2024-07-14 22:30:58.191600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129269 ] 00:22:59.271 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:59.271 Zero copy mechanism will not be used. 
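The raid_rebuild_test_io run that starts here uses bdevperf itself as the RPC target: it is launched idle so the base bdevs and raid_bdev1 can be created over the socket first, and the 60-second randrw workload is only kicked off later through bdevperf.py. A minimal sketch of that launch pattern, assuming the binary and script paths from this log; the raid_pid bookkeeping is illustrative:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
SOCK=/var/tmp/spdk-raid.sock

# -z starts bdevperf without running tests; -T raid_bdev1 targets the raid
# bdev; -w randrw -M 50 is a 50/50 read/write mix at 3M I/O, queue depth 2
$BDEVPERF -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# ...create the base bdevs and raid_bdev1 over "$SOCK", then start the I/O:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &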
00:22:59.530 [2024-07-14 22:30:58.359139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.530 [2024-07-14 22:30:58.517131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.788 [2024-07-14 22:30:58.680020] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.048 22:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.048 22:30:59 -- common/autotest_common.sh@852 -- # return 0 00:23:00.048 22:30:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:00.048 22:30:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:00.048 22:30:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:00.307 BaseBdev1 00:23:00.307 22:30:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:00.307 22:30:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:00.307 22:30:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.565 BaseBdev2 00:23:00.565 22:30:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:00.566 22:30:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:00.566 22:30:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:00.823 BaseBdev3 00:23:00.823 22:30:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:00.823 22:30:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:00.823 22:30:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:01.083 BaseBdev4 00:23:01.083 22:30:59 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:01.340 spare_malloc 00:23:01.340 22:31:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:01.340 spare_delay 00:23:01.340 22:31:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:01.599 [2024-07-14 22:31:00.559810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:01.599 [2024-07-14 22:31:00.559882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.599 [2024-07-14 22:31:00.559914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:01.599 [2024-07-14 22:31:00.559956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.599 [2024-07-14 22:31:00.561815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.599 [2024-07-14 22:31:00.561860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:01.599 spare 00:23:01.599 22:31:00 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:01.857 [2024-07-14 22:31:00.727873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.857 [2024-07-14 22:31:00.729341] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:01.857 [2024-07-14 22:31:00.729398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:01.857 [2024-07-14 22:31:00.729439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:01.857 [2024-07-14 22:31:00.729513] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:01.857 [2024-07-14 22:31:00.729526] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:01.857 [2024-07-14 22:31:00.729668] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:01.857 [2024-07-14 22:31:00.729987] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:01.857 [2024-07-14 22:31:00.730011] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:23:01.857 [2024-07-14 22:31:00.730167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.857 22:31:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.115 22:31:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.115 "name": "raid_bdev1", 00:23:02.115 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:02.115 "strip_size_kb": 0, 00:23:02.115 "state": "online", 00:23:02.115 "raid_level": "raid1", 00:23:02.115 "superblock": false, 00:23:02.115 "num_base_bdevs": 4, 00:23:02.115 "num_base_bdevs_discovered": 4, 00:23:02.115 "num_base_bdevs_operational": 4, 00:23:02.115 "base_bdevs_list": [ 00:23:02.115 { 00:23:02.115 "name": "BaseBdev1", 00:23:02.115 "uuid": "957a80e2-8b73-42db-beda-b25e734b639d", 00:23:02.115 "is_configured": true, 00:23:02.115 "data_offset": 0, 00:23:02.115 "data_size": 65536 00:23:02.115 }, 00:23:02.115 { 00:23:02.115 "name": "BaseBdev2", 00:23:02.115 "uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:02.115 "is_configured": true, 00:23:02.115 "data_offset": 0, 00:23:02.115 "data_size": 65536 00:23:02.115 }, 00:23:02.115 { 00:23:02.115 "name": "BaseBdev3", 00:23:02.115 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:02.115 "is_configured": true, 00:23:02.115 "data_offset": 0, 00:23:02.115 "data_size": 65536 00:23:02.115 }, 00:23:02.115 { 00:23:02.115 "name": "BaseBdev4", 00:23:02.116 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:02.116 "is_configured": true, 00:23:02.116 "data_offset": 0, 00:23:02.116 "data_size": 65536 00:23:02.116 } 00:23:02.116 ] 00:23:02.116 }' 00:23:02.116 
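Every verify_raid_bdev_state call in this trace reduces to one RPC plus a jq filter over its JSON output. A minimal standalone equivalent, assuming the same socket path as the log; the raid_state helper name and the exact field checks are illustrative:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_state() {
    # Dump all raid bdevs and keep only raid_bdev1, exactly as the test does
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
}

info=$(raid_state)
# Compare the reported fields against the expected values
[[ $(jq -r '.state' <<<"$info") == online ]] || exit 1
[[ $(jq -r '.raid_level' <<<"$info") == raid1 ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 4 ]] || exit 1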
22:31:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.116 22:31:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.682 22:31:01 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:02.682 22:31:01 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:02.940 [2024-07-14 22:31:01.872343] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.940 22:31:01 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:02.940 22:31:01 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:02.940 22:31:01 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.198 22:31:02 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:03.198 22:31:02 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:03.198 22:31:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:03.199 22:31:02 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:03.199 [2024-07-14 22:31:02.238533] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:03.199 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:03.199 Zero copy mechanism will not be used. 00:23:03.199 Running I/O for 60 seconds... 00:23:03.457 [2024-07-14 22:31:02.313571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.457 [2024-07-14 22:31:02.319480] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.457 22:31:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.716 22:31:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.716 "name": "raid_bdev1", 00:23:03.716 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:03.716 "strip_size_kb": 0, 00:23:03.716 "state": "online", 00:23:03.716 "raid_level": "raid1", 00:23:03.716 "superblock": false, 00:23:03.716 "num_base_bdevs": 4, 00:23:03.716 "num_base_bdevs_discovered": 3, 00:23:03.716 "num_base_bdevs_operational": 3, 00:23:03.716 "base_bdevs_list": [ 00:23:03.716 { 00:23:03.716 "name": null, 00:23:03.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.716 "is_configured": false, 00:23:03.716 "data_offset": 0, 00:23:03.716 "data_size": 65536 00:23:03.716 }, 00:23:03.716 { 00:23:03.716 "name": "BaseBdev2", 00:23:03.716 
"uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:03.716 "is_configured": true, 00:23:03.716 "data_offset": 0, 00:23:03.716 "data_size": 65536 00:23:03.716 }, 00:23:03.716 { 00:23:03.716 "name": "BaseBdev3", 00:23:03.716 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:03.716 "is_configured": true, 00:23:03.716 "data_offset": 0, 00:23:03.716 "data_size": 65536 00:23:03.716 }, 00:23:03.716 { 00:23:03.716 "name": "BaseBdev4", 00:23:03.716 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:03.716 "is_configured": true, 00:23:03.716 "data_offset": 0, 00:23:03.716 "data_size": 65536 00:23:03.716 } 00:23:03.716 ] 00:23:03.716 }' 00:23:03.716 22:31:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.716 22:31:02 -- common/autotest_common.sh@10 -- # set +x 00:23:04.283 22:31:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.541 [2024-07-14 22:31:03.434037] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:04.541 [2024-07-14 22:31:03.434143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.541 [2024-07-14 22:31:03.469699] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:04.541 22:31:03 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:04.541 [2024-07-14 22:31:03.471752] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.541 [2024-07-14 22:31:03.600773] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:04.541 [2024-07-14 22:31:03.601281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:04.798 [2024-07-14 22:31:03.718465] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:04.798 [2024-07-14 22:31:03.719121] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:05.056 [2024-07-14 22:31:04.047968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:05.056 [2024-07-14 22:31:04.049104] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:05.313 [2024-07-14 22:31:04.282381] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:05.313 [2024-07-14 22:31:04.288667] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.572 22:31:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.572 [2024-07-14 22:31:04.615487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:05.831 22:31:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:05.831 "name": "raid_bdev1", 00:23:05.831 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:05.831 "strip_size_kb": 0, 00:23:05.831 "state": "online", 00:23:05.831 "raid_level": "raid1", 00:23:05.832 "superblock": false, 00:23:05.832 "num_base_bdevs": 4, 00:23:05.832 "num_base_bdevs_discovered": 4, 00:23:05.832 "num_base_bdevs_operational": 4, 00:23:05.832 "process": { 00:23:05.832 "type": "rebuild", 00:23:05.832 "target": "spare", 00:23:05.832 "progress": { 00:23:05.832 "blocks": 14336, 00:23:05.832 "percent": 21 00:23:05.832 } 00:23:05.832 }, 00:23:05.832 "base_bdevs_list": [ 00:23:05.832 { 00:23:05.832 "name": "spare", 00:23:05.832 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:05.832 "is_configured": true, 00:23:05.832 "data_offset": 0, 00:23:05.832 "data_size": 65536 00:23:05.832 }, 00:23:05.832 { 00:23:05.832 "name": "BaseBdev2", 00:23:05.832 "uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:05.832 "is_configured": true, 00:23:05.832 "data_offset": 0, 00:23:05.832 "data_size": 65536 00:23:05.832 }, 00:23:05.832 { 00:23:05.832 "name": "BaseBdev3", 00:23:05.832 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:05.832 "is_configured": true, 00:23:05.832 "data_offset": 0, 00:23:05.832 "data_size": 65536 00:23:05.832 }, 00:23:05.832 { 00:23:05.832 "name": "BaseBdev4", 00:23:05.832 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:05.832 "is_configured": true, 00:23:05.832 "data_offset": 0, 00:23:05.832 "data_size": 65536 00:23:05.832 } 00:23:05.832 ] 00:23:05.832 }' 00:23:05.832 22:31:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:05.832 22:31:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.832 22:31:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:05.832 22:31:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.832 22:31:04 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:06.090 [2024-07-14 22:31:05.021509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.090 [2024-07-14 22:31:05.135240] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:06.090 [2024-07-14 22:31:05.145284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.349 [2024-07-14 22:31:05.163592] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.349 22:31:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.608 22:31:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.608 "name": "raid_bdev1", 00:23:06.608 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:06.608 "strip_size_kb": 0, 00:23:06.608 "state": "online", 00:23:06.608 "raid_level": "raid1", 00:23:06.608 "superblock": false, 00:23:06.608 "num_base_bdevs": 4, 00:23:06.608 "num_base_bdevs_discovered": 3, 00:23:06.608 "num_base_bdevs_operational": 3, 00:23:06.608 "base_bdevs_list": [ 00:23:06.608 { 00:23:06.608 "name": null, 00:23:06.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.608 "is_configured": false, 00:23:06.608 "data_offset": 0, 00:23:06.608 "data_size": 65536 00:23:06.608 }, 00:23:06.608 { 00:23:06.608 "name": "BaseBdev2", 00:23:06.608 "uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:06.608 "is_configured": true, 00:23:06.608 "data_offset": 0, 00:23:06.608 "data_size": 65536 00:23:06.608 }, 00:23:06.608 { 00:23:06.608 "name": "BaseBdev3", 00:23:06.608 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:06.608 "is_configured": true, 00:23:06.608 "data_offset": 0, 00:23:06.608 "data_size": 65536 00:23:06.608 }, 00:23:06.608 { 00:23:06.608 "name": "BaseBdev4", 00:23:06.608 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:06.609 "is_configured": true, 00:23:06.609 "data_offset": 0, 00:23:06.609 "data_size": 65536 00:23:06.609 } 00:23:06.609 ] 00:23:06.609 }' 00:23:06.609 22:31:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.609 22:31:05 -- common/autotest_common.sh@10 -- # set +x 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.177 22:31:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.435 22:31:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.435 "name": "raid_bdev1", 00:23:07.435 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:07.435 "strip_size_kb": 0, 00:23:07.435 "state": "online", 00:23:07.435 "raid_level": "raid1", 00:23:07.435 "superblock": false, 00:23:07.435 "num_base_bdevs": 4, 00:23:07.435 "num_base_bdevs_discovered": 3, 00:23:07.435 "num_base_bdevs_operational": 3, 00:23:07.435 "base_bdevs_list": [ 00:23:07.435 { 00:23:07.435 "name": null, 00:23:07.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.435 "is_configured": false, 00:23:07.435 "data_offset": 0, 00:23:07.435 "data_size": 65536 00:23:07.435 }, 00:23:07.435 { 00:23:07.435 "name": "BaseBdev2", 00:23:07.435 "uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:07.435 "is_configured": true, 00:23:07.435 "data_offset": 0, 00:23:07.435 "data_size": 65536 00:23:07.435 }, 00:23:07.435 { 00:23:07.435 "name": "BaseBdev3", 00:23:07.435 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:07.435 "is_configured": true, 00:23:07.435 "data_offset": 0, 00:23:07.435 "data_size": 65536 00:23:07.435 }, 00:23:07.435 { 00:23:07.435 "name": "BaseBdev4", 00:23:07.435 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 
00:23:07.435 "is_configured": true, 00:23:07.435 "data_offset": 0, 00:23:07.435 "data_size": 65536 00:23:07.435 } 00:23:07.435 ] 00:23:07.435 }' 00:23:07.435 22:31:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.435 22:31:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:07.435 22:31:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:07.435 22:31:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:07.436 22:31:06 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:07.694 [2024-07-14 22:31:06.672153] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:07.694 [2024-07-14 22:31:06.672213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.694 [2024-07-14 22:31:06.707459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:07.694 [2024-07-14 22:31:06.709426] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:07.694 22:31:06 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:07.952 [2024-07-14 22:31:06.825666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:07.952 [2024-07-14 22:31:06.826240] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:07.952 [2024-07-14 22:31:06.943291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:07.952 [2024-07-14 22:31:06.944003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:08.245 [2024-07-14 22:31:07.283803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:08.245 [2024-07-14 22:31:07.284972] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:08.518 [2024-07-14 22:31:07.486608] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:08.518 [2024-07-14 22:31:07.486827] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.777 22:31:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.777 [2024-07-14 22:31:07.750501] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:09.036 [2024-07-14 22:31:07.857930] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:09.037 [2024-07-14 22:31:07.858538] bdev_raid.c: 723:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:09.037 22:31:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.037 "name": "raid_bdev1", 00:23:09.037 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:09.037 "strip_size_kb": 0, 00:23:09.037 "state": "online", 00:23:09.037 "raid_level": "raid1", 00:23:09.037 "superblock": false, 00:23:09.037 "num_base_bdevs": 4, 00:23:09.037 "num_base_bdevs_discovered": 4, 00:23:09.037 "num_base_bdevs_operational": 4, 00:23:09.037 "process": { 00:23:09.037 "type": "rebuild", 00:23:09.037 "target": "spare", 00:23:09.037 "progress": { 00:23:09.037 "blocks": 16384, 00:23:09.037 "percent": 25 00:23:09.037 } 00:23:09.037 }, 00:23:09.037 "base_bdevs_list": [ 00:23:09.037 { 00:23:09.037 "name": "spare", 00:23:09.037 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:09.037 "is_configured": true, 00:23:09.037 "data_offset": 0, 00:23:09.037 "data_size": 65536 00:23:09.037 }, 00:23:09.037 { 00:23:09.037 "name": "BaseBdev2", 00:23:09.037 "uuid": "f35dff07-9db3-40f3-8445-65a3b217b222", 00:23:09.037 "is_configured": true, 00:23:09.037 "data_offset": 0, 00:23:09.037 "data_size": 65536 00:23:09.037 }, 00:23:09.037 { 00:23:09.037 "name": "BaseBdev3", 00:23:09.037 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:09.037 "is_configured": true, 00:23:09.037 "data_offset": 0, 00:23:09.037 "data_size": 65536 00:23:09.037 }, 00:23:09.037 { 00:23:09.037 "name": "BaseBdev4", 00:23:09.037 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:09.037 "is_configured": true, 00:23:09.037 "data_offset": 0, 00:23:09.037 "data_size": 65536 00:23:09.037 } 00:23:09.037 ] 00:23:09.037 }' 00:23:09.037 22:31:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:09.037 22:31:08 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:09.296 [2024-07-14 22:31:08.214010] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:09.296 [2024-07-14 22:31:08.318120] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:09.296 [2024-07-14 22:31:08.337524] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:09.296 [2024-07-14 22:31:08.345982] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:23:09.296 [2024-07-14 22:31:08.346017] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@184 
-- # local process_type=rebuild 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.555 22:31:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.555 "name": "raid_bdev1", 00:23:09.555 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:09.555 "strip_size_kb": 0, 00:23:09.555 "state": "online", 00:23:09.555 "raid_level": "raid1", 00:23:09.555 "superblock": false, 00:23:09.555 "num_base_bdevs": 4, 00:23:09.555 "num_base_bdevs_discovered": 3, 00:23:09.555 "num_base_bdevs_operational": 3, 00:23:09.555 "process": { 00:23:09.555 "type": "rebuild", 00:23:09.555 "target": "spare", 00:23:09.555 "progress": { 00:23:09.555 "blocks": 24576, 00:23:09.556 "percent": 37 00:23:09.556 } 00:23:09.556 }, 00:23:09.556 "base_bdevs_list": [ 00:23:09.556 { 00:23:09.556 "name": "spare", 00:23:09.556 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:09.556 "is_configured": true, 00:23:09.556 "data_offset": 0, 00:23:09.556 "data_size": 65536 00:23:09.556 }, 00:23:09.556 { 00:23:09.556 "name": null, 00:23:09.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.556 "is_configured": false, 00:23:09.556 "data_offset": 0, 00:23:09.556 "data_size": 65536 00:23:09.556 }, 00:23:09.556 { 00:23:09.556 "name": "BaseBdev3", 00:23:09.556 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:09.556 "is_configured": true, 00:23:09.556 "data_offset": 0, 00:23:09.556 "data_size": 65536 00:23:09.556 }, 00:23:09.556 { 00:23:09.556 "name": "BaseBdev4", 00:23:09.556 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:09.556 "is_configured": true, 00:23:09.556 "data_offset": 0, 00:23:09.556 "data_size": 65536 00:23:09.556 } 00:23:09.556 ] 00:23:09.556 }' 00:23:09.556 22:31:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.556 [2024-07-14 22:31:08.582798] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:09.556 [2024-07-14 22:31:08.583151] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:09.556 22:31:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.556 22:31:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@657 -- # local timeout=525 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.815 22:31:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.074 22:31:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:10.074 "name": 
"raid_bdev1", 00:23:10.074 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:10.074 "strip_size_kb": 0, 00:23:10.074 "state": "online", 00:23:10.074 "raid_level": "raid1", 00:23:10.074 "superblock": false, 00:23:10.074 "num_base_bdevs": 4, 00:23:10.074 "num_base_bdevs_discovered": 3, 00:23:10.074 "num_base_bdevs_operational": 3, 00:23:10.074 "process": { 00:23:10.074 "type": "rebuild", 00:23:10.074 "target": "spare", 00:23:10.074 "progress": { 00:23:10.074 "blocks": 28672, 00:23:10.074 "percent": 43 00:23:10.074 } 00:23:10.074 }, 00:23:10.074 "base_bdevs_list": [ 00:23:10.074 { 00:23:10.074 "name": "spare", 00:23:10.074 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:10.074 "is_configured": true, 00:23:10.074 "data_offset": 0, 00:23:10.074 "data_size": 65536 00:23:10.074 }, 00:23:10.074 { 00:23:10.074 "name": null, 00:23:10.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.074 "is_configured": false, 00:23:10.074 "data_offset": 0, 00:23:10.074 "data_size": 65536 00:23:10.074 }, 00:23:10.074 { 00:23:10.074 "name": "BaseBdev3", 00:23:10.074 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:10.074 "is_configured": true, 00:23:10.074 "data_offset": 0, 00:23:10.074 "data_size": 65536 00:23:10.074 }, 00:23:10.074 { 00:23:10.074 "name": "BaseBdev4", 00:23:10.074 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:10.074 "is_configured": true, 00:23:10.074 "data_offset": 0, 00:23:10.074 "data_size": 65536 00:23:10.074 } 00:23:10.074 ] 00:23:10.074 }' 00:23:10.074 22:31:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:10.074 22:31:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.074 22:31:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:10.074 22:31:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.074 22:31:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:10.642 [2024-07-14 22:31:09.472438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.211 [2024-07-14 22:31:10.229265] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:11.211 22:31:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:11.211 "name": "raid_bdev1", 00:23:11.211 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:11.211 "strip_size_kb": 0, 00:23:11.211 "state": "online", 00:23:11.211 "raid_level": "raid1", 00:23:11.211 "superblock": false, 00:23:11.211 "num_base_bdevs": 4, 00:23:11.212 "num_base_bdevs_discovered": 3, 00:23:11.212 "num_base_bdevs_operational": 3, 00:23:11.212 "process": { 00:23:11.212 "type": "rebuild", 00:23:11.212 "target": "spare", 00:23:11.212 "progress": { 00:23:11.212 "blocks": 53248, 00:23:11.212 "percent": 81 00:23:11.212 
} 00:23:11.212 }, 00:23:11.212 "base_bdevs_list": [ 00:23:11.212 { 00:23:11.212 "name": "spare", 00:23:11.212 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:11.212 "is_configured": true, 00:23:11.212 "data_offset": 0, 00:23:11.212 "data_size": 65536 00:23:11.212 }, 00:23:11.212 { 00:23:11.212 "name": null, 00:23:11.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.212 "is_configured": false, 00:23:11.212 "data_offset": 0, 00:23:11.212 "data_size": 65536 00:23:11.212 }, 00:23:11.212 { 00:23:11.212 "name": "BaseBdev3", 00:23:11.212 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:11.212 "is_configured": true, 00:23:11.212 "data_offset": 0, 00:23:11.212 "data_size": 65536 00:23:11.212 }, 00:23:11.212 { 00:23:11.212 "name": "BaseBdev4", 00:23:11.212 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:11.212 "is_configured": true, 00:23:11.212 "data_offset": 0, 00:23:11.212 "data_size": 65536 00:23:11.212 } 00:23:11.212 ] 00:23:11.212 }' 00:23:11.212 22:31:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:11.471 22:31:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.471 22:31:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:11.471 22:31:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.471 22:31:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:11.731 [2024-07-14 22:31:10.552321] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:11.990 [2024-07-14 22:31:10.980822] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:12.249 [2024-07-14 22:31:11.086573] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:12.249 [2024-07-14 22:31:11.088431] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.508 22:31:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:12.766 "name": "raid_bdev1", 00:23:12.766 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:12.766 "strip_size_kb": 0, 00:23:12.766 "state": "online", 00:23:12.766 "raid_level": "raid1", 00:23:12.766 "superblock": false, 00:23:12.766 "num_base_bdevs": 4, 00:23:12.766 "num_base_bdevs_discovered": 3, 00:23:12.766 "num_base_bdevs_operational": 3, 00:23:12.766 "base_bdevs_list": [ 00:23:12.766 { 00:23:12.766 "name": "spare", 00:23:12.766 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:12.766 "is_configured": true, 00:23:12.766 "data_offset": 0, 00:23:12.766 "data_size": 65536 00:23:12.766 }, 00:23:12.766 { 00:23:12.766 "name": null, 00:23:12.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.766 "is_configured": false, 00:23:12.766 "data_offset": 0, 00:23:12.766 "data_size": 65536 00:23:12.766 }, 00:23:12.766 { 
00:23:12.766 "name": "BaseBdev3", 00:23:12.766 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:12.766 "is_configured": true, 00:23:12.766 "data_offset": 0, 00:23:12.766 "data_size": 65536 00:23:12.766 }, 00:23:12.766 { 00:23:12.766 "name": "BaseBdev4", 00:23:12.766 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:12.766 "is_configured": true, 00:23:12.766 "data_offset": 0, 00:23:12.766 "data_size": 65536 00:23:12.766 } 00:23:12.766 ] 00:23:12.766 }' 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@660 -- # break 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.766 22:31:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.025 22:31:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:13.025 "name": "raid_bdev1", 00:23:13.025 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:13.025 "strip_size_kb": 0, 00:23:13.025 "state": "online", 00:23:13.025 "raid_level": "raid1", 00:23:13.025 "superblock": false, 00:23:13.025 "num_base_bdevs": 4, 00:23:13.025 "num_base_bdevs_discovered": 3, 00:23:13.025 "num_base_bdevs_operational": 3, 00:23:13.025 "base_bdevs_list": [ 00:23:13.025 { 00:23:13.025 "name": "spare", 00:23:13.025 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:13.025 "is_configured": true, 00:23:13.025 "data_offset": 0, 00:23:13.025 "data_size": 65536 00:23:13.025 }, 00:23:13.025 { 00:23:13.025 "name": null, 00:23:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.025 "is_configured": false, 00:23:13.025 "data_offset": 0, 00:23:13.025 "data_size": 65536 00:23:13.025 }, 00:23:13.025 { 00:23:13.025 "name": "BaseBdev3", 00:23:13.025 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:13.025 "is_configured": true, 00:23:13.025 "data_offset": 0, 00:23:13.025 "data_size": 65536 00:23:13.025 }, 00:23:13.025 { 00:23:13.025 "name": "BaseBdev4", 00:23:13.025 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:13.025 "is_configured": true, 00:23:13.025 "data_offset": 0, 00:23:13.025 "data_size": 65536 00:23:13.025 } 00:23:13.025 ] 00:23:13.025 }' 00:23:13.025 22:31:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.025 22:31:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:13.025 22:31:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:13.284 
22:31:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.284 "name": "raid_bdev1", 00:23:13.284 "uuid": "75970ae3-99fa-45d5-aac4-6b6d728735c8", 00:23:13.284 "strip_size_kb": 0, 00:23:13.284 "state": "online", 00:23:13.284 "raid_level": "raid1", 00:23:13.284 "superblock": false, 00:23:13.284 "num_base_bdevs": 4, 00:23:13.284 "num_base_bdevs_discovered": 3, 00:23:13.284 "num_base_bdevs_operational": 3, 00:23:13.284 "base_bdevs_list": [ 00:23:13.284 { 00:23:13.284 "name": "spare", 00:23:13.284 "uuid": "d9a02886-7c63-5949-bfe6-423c83b2f529", 00:23:13.284 "is_configured": true, 00:23:13.284 "data_offset": 0, 00:23:13.284 "data_size": 65536 00:23:13.284 }, 00:23:13.284 { 00:23:13.284 "name": null, 00:23:13.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.284 "is_configured": false, 00:23:13.284 "data_offset": 0, 00:23:13.284 "data_size": 65536 00:23:13.284 }, 00:23:13.284 { 00:23:13.284 "name": "BaseBdev3", 00:23:13.284 "uuid": "d81db060-35ff-4626-97fa-e1ea26000fab", 00:23:13.284 "is_configured": true, 00:23:13.284 "data_offset": 0, 00:23:13.284 "data_size": 65536 00:23:13.284 }, 00:23:13.284 { 00:23:13.284 "name": "BaseBdev4", 00:23:13.284 "uuid": "fd18ff52-a8c6-4ae2-8442-cd5b4e508bee", 00:23:13.284 "is_configured": true, 00:23:13.284 "data_offset": 0, 00:23:13.284 "data_size": 65536 00:23:13.284 } 00:23:13.284 ] 00:23:13.284 }' 00:23:13.284 22:31:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.284 22:31:12 -- common/autotest_common.sh@10 -- # set +x 00:23:14.221 22:31:12 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.221 [2024-07-14 22:31:13.218878] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.221 [2024-07-14 22:31:13.218916] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.221 00:23:14.221 Latency(us) 00:23:14.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.221 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:14.221 raid_bdev1 : 11.00 113.11 339.33 0.00 0.00 12428.06 301.61 109623.85 00:23:14.221 =================================================================================================================== 00:23:14.221 Total : 113.11 339.33 0.00 0.00 12428.06 301.61 109623.85 00:23:14.221 [2024-07-14 22:31:13.256885] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.221 [2024-07-14 22:31:13.256952] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.221 0 00:23:14.221 [2024-07-14 22:31:13.257064] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.221 [2024-07-14 22:31:13.257082] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:23:14.221 22:31:13 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.221 22:31:13 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:14.479 22:31:13 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:14.479 22:31:13 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:14.479 22:31:13 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@12 -- # local i 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:14.479 22:31:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:14.737 /dev/nbd0 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:14.995 22:31:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:14.995 22:31:13 -- common/autotest_common.sh@857 -- # local i 00:23:14.995 22:31:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:14.995 22:31:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:14.995 22:31:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:14.995 22:31:13 -- common/autotest_common.sh@861 -- # break 00:23:14.995 22:31:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:14.995 22:31:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:14.995 22:31:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:14.995 1+0 records in 00:23:14.995 1+0 records out 00:23:14.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428211 s, 9.6 MB/s 00:23:14.995 22:31:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.995 22:31:13 -- common/autotest_common.sh@874 -- # size=4096 00:23:14.995 22:31:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.995 22:31:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:14.995 22:31:13 -- common/autotest_common.sh@877 -- # return 0 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@678 -- # continue 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:14.995 22:31:13 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:14.995 22:31:13 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@12 -- # local i 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:14.995 22:31:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:15.253 /dev/nbd1 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:15.253 22:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:15.253 22:31:14 -- common/autotest_common.sh@857 -- # local i 00:23:15.253 22:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:15.253 22:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:15.253 22:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:15.253 22:31:14 -- common/autotest_common.sh@861 -- # break 00:23:15.253 22:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:15.253 22:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:15.253 22:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:15.253 1+0 records in 00:23:15.253 1+0 records out 00:23:15.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349245 s, 11.7 MB/s 00:23:15.253 22:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.253 22:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:23:15.253 22:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.253 22:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:15.253 22:31:14 -- common/autotest_common.sh@877 -- # return 0 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:15.253 22:31:14 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:15.253 22:31:14 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@51 -- # local i 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.253 22:31:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:15.511 22:31:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
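The xtrace lines above come from nbd_common.sh's waitfornbd_exit helper, which polls /proc/partitions until the kernel drops the /dev/nbdX entry after nbd_stop_disk. A minimal sketch of that polling pattern, reconstructed from the trace (the real SPDK helper may differ in detail):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        # Poll up to 20 times (~2 s total) for the kernel to remove the
        # device from /proc/partitions after the NBD disconnect.
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0    # entry gone: disconnect has completed
            fi
            sleep 0.1
        done
        return 1            # still present after the timeout
    }

The mirror-image helper waitfornbd, traced earlier when the disks were started, loops on the same grep with the condition inverted, waiting for the entry to appear before the dd read check runs.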
00:23:15.768 22:31:14 -- bdev/nbd_common.sh@41 -- # break 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.768 22:31:14 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:15.768 22:31:14 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:15.768 22:31:14 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@12 -- # local i 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:15.768 22:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:16.026 /dev/nbd1 00:23:16.026 22:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:16.026 22:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:16.026 22:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:16.026 22:31:14 -- common/autotest_common.sh@857 -- # local i 00:23:16.026 22:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:16.026 22:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:16.026 22:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:16.026 22:31:14 -- common/autotest_common.sh@861 -- # break 00:23:16.026 22:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:16.026 22:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:16.026 22:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.026 1+0 records in 00:23:16.026 1+0 records out 00:23:16.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522577 s, 7.8 MB/s 00:23:16.026 22:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.026 22:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:23:16.026 22:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.026 22:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:16.026 22:31:14 -- common/autotest_common.sh@877 -- # return 0 00:23:16.026 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.026 22:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:16.026 22:31:14 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:16.026 22:31:15 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@51 -- # local i 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:16.026 22:31:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
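The loop traced here (bdev_raid.sh @676-@682) verifies the rebuild result by content comparison: the rebuilt spare is exported once as /dev/nbd0, then each surviving base bdev is exported in turn as /dev/nbd1 and byte-compared against it, skipping the slot left empty by the removed bdev. A hedged sketch of that loop; the variable names and array contents below are assumptions inferred from the trace and the base_bdevs_list JSON earlier, and the nbd_* helpers are assumed to be sourced from SPDK's nbd_common.sh:

    rpc_sock=/var/tmp/spdk-raid.sock
    base_bdevs=(spare '' BaseBdev3 BaseBdev4)   # slot 1 emptied by the earlier removal

    nbd_start_disks "$rpc_sock" spare /dev/nbd0
    for bdev in "${base_bdevs[@]:1}"; do
        [ -z "$bdev" ] && continue              # nothing to compare for the empty slot
        nbd_start_disks "$rpc_sock" "$bdev" /dev/nbd1
        # -i skips data_offset bytes before comparing; 0 here because this
        # raid1 was created without a superblock.
        cmp -i 0 /dev/nbd0 /dev/nbd1
        nbd_stop_disks "$rpc_sock" /dev/nbd1
    done
    nbd_stop_disks "$rpc_sock" /dev/nbd0

In a raid1 array every member holds the same data, so after a successful rebuild the spare must compare equal to each remaining base bdev.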
00:23:16.285 22:31:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:16.285 22:31:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@41 -- # break 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@45 -- # return 0 00:23:16.544 22:31:15 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@51 -- # local i 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:16.544 22:31:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@41 -- # break 00:23:16.802 22:31:15 -- bdev/nbd_common.sh@45 -- # return 0 00:23:16.802 22:31:15 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:16.802 22:31:15 -- bdev/bdev_raid.sh@709 -- # killprocess 129269 00:23:16.802 22:31:15 -- common/autotest_common.sh@926 -- # '[' -z 129269 ']' 00:23:16.802 22:31:15 -- common/autotest_common.sh@930 -- # kill -0 129269 00:23:16.802 22:31:15 -- common/autotest_common.sh@931 -- # uname 00:23:16.802 22:31:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:16.802 22:31:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129269 00:23:16.802 22:31:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:16.802 22:31:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:16.802 22:31:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129269' 00:23:16.802 killing process with pid 129269 00:23:16.802 22:31:15 -- common/autotest_common.sh@945 -- # kill 129269 00:23:16.802 22:31:15 -- common/autotest_common.sh@950 -- # wait 129269 00:23:16.802 Received shutdown signal, test time was about 13.586920 seconds 00:23:16.802 00:23:16.802 Latency(us) 00:23:16.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.802 =================================================================================================================== 00:23:16.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.802 [2024-07-14 22:31:15.827543] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.061 [2024-07-14 22:31:16.098918] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.997 ************************************ 00:23:17.997 END TEST raid_rebuild_test_io 00:23:17.997 ************************************ 00:23:17.997 22:31:17 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:17.997 00:23:17.997 real 0m18.942s 00:23:17.997 user 0m29.382s 00:23:17.997 sys 0m2.368s 00:23:17.997 22:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.997 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:23:18.256 22:31:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:18.256 22:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:18.256 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.256 ************************************ 00:23:18.256 START TEST raid_rebuild_test_sb_io 00:23:18.256 ************************************ 00:23:18.256 22:31:17 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@544 -- # raid_pid=129820 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 
-o 3M -q 2 -U -z -L bdev_raid 00:23:18.256 22:31:17 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129820 /var/tmp/spdk-raid.sock 00:23:18.256 22:31:17 -- common/autotest_common.sh@819 -- # '[' -z 129820 ']' 00:23:18.256 22:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:18.256 22:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:18.256 22:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.256 22:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:18.256 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.256 [2024-07-14 22:31:17.168211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:18.256 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:18.256 Zero copy mechanism will not be used. 00:23:18.256 [2024-07-14 22:31:17.168392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129820 ] 00:23:18.256 [2024-07-14 22:31:17.312439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.515 [2024-07-14 22:31:17.467904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.773 [2024-07-14 22:31:17.632107] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.031 22:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:19.031 22:31:18 -- common/autotest_common.sh@852 -- # return 0 00:23:19.031 22:31:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:19.031 22:31:18 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:19.031 22:31:18 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:19.289 BaseBdev1_malloc 00:23:19.289 22:31:18 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:19.546 [2024-07-14 22:31:18.489945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:19.546 [2024-07-14 22:31:18.490031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.546 [2024-07-14 22:31:18.490063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:19.546 [2024-07-14 22:31:18.490111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.546 [2024-07-14 22:31:18.492249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.546 [2024-07-14 22:31:18.492318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:19.546 BaseBdev1 00:23:19.546 22:31:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:19.546 22:31:18 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:19.546 22:31:18 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:19.803 BaseBdev2_malloc 00:23:19.803 22:31:18 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:20.061 [2024-07-14 22:31:19.014181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:20.061 [2024-07-14 22:31:19.014249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.061 [2024-07-14 22:31:19.014290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:20.061 [2024-07-14 22:31:19.014339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.061 [2024-07-14 22:31:19.016360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.061 [2024-07-14 22:31:19.016411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:20.061 BaseBdev2 00:23:20.061 22:31:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:20.061 22:31:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:20.061 22:31:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:20.318 BaseBdev3_malloc 00:23:20.318 22:31:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:20.574 [2024-07-14 22:31:19.486943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:20.574 [2024-07-14 22:31:19.487019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.574 [2024-07-14 22:31:19.487058] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:20.575 [2024-07-14 22:31:19.487099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.575 [2024-07-14 22:31:19.489073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.575 [2024-07-14 22:31:19.489130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:20.575 BaseBdev3 00:23:20.575 22:31:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:20.575 22:31:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:20.575 22:31:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:20.833 BaseBdev4_malloc 00:23:20.833 22:31:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:21.096 [2024-07-14 22:31:19.911643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:21.096 [2024-07-14 22:31:19.911741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.096 [2024-07-14 22:31:19.911788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:21.096 [2024-07-14 22:31:19.911834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.096 [2024-07-14 22:31:19.913944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.096 [2024-07-14 22:31:19.914005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:21.096 BaseBdev4 00:23:21.096 22:31:19 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b spare_malloc 00:23:21.353 spare_malloc 00:23:21.353 22:31:20 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:21.353 spare_delay 00:23:21.353 22:31:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:21.610 [2024-07-14 22:31:20.644171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:21.610 [2024-07-14 22:31:20.644243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.610 [2024-07-14 22:31:20.644275] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:21.610 [2024-07-14 22:31:20.644331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.610 [2024-07-14 22:31:20.646466] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.610 [2024-07-14 22:31:20.646540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:21.610 spare 00:23:21.610 22:31:20 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:21.868 [2024-07-14 22:31:20.836277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.868 [2024-07-14 22:31:20.838018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.868 [2024-07-14 22:31:20.838110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.868 [2024-07-14 22:31:20.838168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.868 [2024-07-14 22:31:20.838403] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:23:21.868 [2024-07-14 22:31:20.838428] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:21.868 [2024-07-14 22:31:20.838532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:21.868 [2024-07-14 22:31:20.838892] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:23:21.868 [2024-07-14 22:31:20.838915] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:23:21.868 [2024-07-14 22:31:20.839059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.868 22:31:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.127 22:31:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.127 "name": "raid_bdev1", 00:23:22.127 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:22.127 "strip_size_kb": 0, 00:23:22.127 "state": "online", 00:23:22.127 "raid_level": "raid1", 00:23:22.127 "superblock": true, 00:23:22.127 "num_base_bdevs": 4, 00:23:22.127 "num_base_bdevs_discovered": 4, 00:23:22.127 "num_base_bdevs_operational": 4, 00:23:22.127 "base_bdevs_list": [ 00:23:22.127 { 00:23:22.127 "name": "BaseBdev1", 00:23:22.127 "uuid": "37e77417-b069-595b-b78e-221402e39417", 00:23:22.127 "is_configured": true, 00:23:22.127 "data_offset": 2048, 00:23:22.127 "data_size": 63488 00:23:22.127 }, 00:23:22.127 { 00:23:22.127 "name": "BaseBdev2", 00:23:22.127 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:22.127 "is_configured": true, 00:23:22.127 "data_offset": 2048, 00:23:22.127 "data_size": 63488 00:23:22.127 }, 00:23:22.127 { 00:23:22.127 "name": "BaseBdev3", 00:23:22.127 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:22.127 "is_configured": true, 00:23:22.127 "data_offset": 2048, 00:23:22.127 "data_size": 63488 00:23:22.127 }, 00:23:22.127 { 00:23:22.127 "name": "BaseBdev4", 00:23:22.127 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:22.127 "is_configured": true, 00:23:22.127 "data_offset": 2048, 00:23:22.127 "data_size": 63488 00:23:22.127 } 00:23:22.127 ] 00:23:22.127 }' 00:23:22.127 22:31:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.127 22:31:21 -- common/autotest_common.sh@10 -- # set +x 00:23:22.694 22:31:21 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:22.694 22:31:21 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:22.952 [2024-07-14 22:31:21.960598] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.952 22:31:21 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:22.952 22:31:21 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:22.952 22:31:21 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.211 22:31:22 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:23.211 22:31:22 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:23.211 22:31:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:23.211 22:31:22 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:23.211 [2024-07-14 22:31:22.238884] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:23.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:23.211 Zero copy mechanism will not be used. 00:23:23.211 Running I/O for 60 seconds... 
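At this point bdevperf is generating randrw traffic against raid_bdev1 while the script, in parallel, hot-removes BaseBdev1 over RPC and then checks that the array stays online in degraded mode. A minimal sketch of that check, with the concurrency of the real script (perform_tests runs alongside the removal) compressed into a straight-line sequence; the rpc wrapper function and the exact assertion are assumptions, though the paths are taken from the trace:

    # Convenience wrapper around the RPC client used throughout this test.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Yank one mirror leg while I/O is in flight.
    rpc bdev_raid_remove_base_bdev BaseBdev1

    # raid1 must survive the loss: still online, one fewer member discovered.
    state=$(rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") |
                   "\(.state) \(.num_base_bdevs_discovered)"')
    [ "$state" = "online 3" ] || exit 1

The debug lines that follow confirm exactly this: _raid_bdev_remove_base_bdev fires for BaseBdev1, and the next raid_bdev_info JSON reports state online with num_base_bdevs_discovered dropped from 4 to 3.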
00:23:23.470 [2024-07-14 22:31:22.395300] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.470 [2024-07-14 22:31:22.407136] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.470 22:31:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.728 22:31:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.728 "name": "raid_bdev1", 00:23:23.728 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:23.728 "strip_size_kb": 0, 00:23:23.728 "state": "online", 00:23:23.728 "raid_level": "raid1", 00:23:23.728 "superblock": true, 00:23:23.728 "num_base_bdevs": 4, 00:23:23.728 "num_base_bdevs_discovered": 3, 00:23:23.728 "num_base_bdevs_operational": 3, 00:23:23.728 "base_bdevs_list": [ 00:23:23.728 { 00:23:23.728 "name": null, 00:23:23.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.728 "is_configured": false, 00:23:23.728 "data_offset": 2048, 00:23:23.728 "data_size": 63488 00:23:23.728 }, 00:23:23.728 { 00:23:23.728 "name": "BaseBdev2", 00:23:23.728 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:23.728 "is_configured": true, 00:23:23.728 "data_offset": 2048, 00:23:23.728 "data_size": 63488 00:23:23.728 }, 00:23:23.728 { 00:23:23.728 "name": "BaseBdev3", 00:23:23.728 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:23.728 "is_configured": true, 00:23:23.728 "data_offset": 2048, 00:23:23.728 "data_size": 63488 00:23:23.728 }, 00:23:23.728 { 00:23:23.728 "name": "BaseBdev4", 00:23:23.728 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:23.728 "is_configured": true, 00:23:23.728 "data_offset": 2048, 00:23:23.728 "data_size": 63488 00:23:23.728 } 00:23:23.728 ] 00:23:23.728 }' 00:23:23.728 22:31:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.728 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.294 22:31:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:24.551 [2024-07-14 22:31:23.494151] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:24.551 [2024-07-14 22:31:23.494220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:24.551 [2024-07-14 22:31:23.540563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:24.551 [2024-07-14 22:31:23.542421] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:24.551 22:31:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:24.808 
[2024-07-14 22:31:23.657993] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:24.808 [2024-07-14 22:31:23.659189] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:24.808 [2024-07-14 22:31:23.862594] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:24.808 [2024-07-14 22:31:23.862913] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:25.372 [2024-07-14 22:31:24.190619] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:25.372 [2024-07-14 22:31:24.191100] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:25.372 [2024-07-14 22:31:24.321090] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:25.372 [2024-07-14 22:31:24.321754] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.630 22:31:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.630 [2024-07-14 22:31:24.685940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:25.888 "name": "raid_bdev1", 00:23:25.888 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:25.888 "strip_size_kb": 0, 00:23:25.888 "state": "online", 00:23:25.888 "raid_level": "raid1", 00:23:25.888 "superblock": true, 00:23:25.888 "num_base_bdevs": 4, 00:23:25.888 "num_base_bdevs_discovered": 4, 00:23:25.888 "num_base_bdevs_operational": 4, 00:23:25.888 "process": { 00:23:25.888 "type": "rebuild", 00:23:25.888 "target": "spare", 00:23:25.888 "progress": { 00:23:25.888 "blocks": 14336, 00:23:25.888 "percent": 22 00:23:25.888 } 00:23:25.888 }, 00:23:25.888 "base_bdevs_list": [ 00:23:25.888 { 00:23:25.888 "name": "spare", 00:23:25.888 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:25.888 "is_configured": true, 00:23:25.888 "data_offset": 2048, 00:23:25.888 "data_size": 63488 00:23:25.888 }, 00:23:25.888 { 00:23:25.888 "name": "BaseBdev2", 00:23:25.888 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:25.888 "is_configured": true, 00:23:25.888 "data_offset": 2048, 00:23:25.888 "data_size": 63488 00:23:25.888 }, 00:23:25.888 { 00:23:25.888 "name": "BaseBdev3", 00:23:25.888 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:25.888 "is_configured": true, 00:23:25.888 "data_offset": 2048, 00:23:25.888 "data_size": 63488 00:23:25.888 }, 00:23:25.888 { 00:23:25.888 "name": "BaseBdev4", 00:23:25.888 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:25.888 
"is_configured": true, 00:23:25.888 "data_offset": 2048, 00:23:25.888 "data_size": 63488 00:23:25.888 } 00:23:25.888 ] 00:23:25.888 }' 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.888 22:31:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:26.147 [2024-07-14 22:31:25.065427] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:26.147 [2024-07-14 22:31:25.065888] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:26.147 [2024-07-14 22:31:25.151107] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:26.406 [2024-07-14 22:31:25.281288] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:26.406 [2024-07-14 22:31:25.298139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.406 [2024-07-14 22:31:25.325163] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.406 22:31:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.665 22:31:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.665 "name": "raid_bdev1", 00:23:26.665 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:26.665 "strip_size_kb": 0, 00:23:26.665 "state": "online", 00:23:26.665 "raid_level": "raid1", 00:23:26.665 "superblock": true, 00:23:26.665 "num_base_bdevs": 4, 00:23:26.665 "num_base_bdevs_discovered": 3, 00:23:26.665 "num_base_bdevs_operational": 3, 00:23:26.665 "base_bdevs_list": [ 00:23:26.665 { 00:23:26.665 "name": null, 00:23:26.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.665 "is_configured": false, 00:23:26.665 "data_offset": 2048, 00:23:26.665 "data_size": 63488 00:23:26.665 }, 00:23:26.665 { 00:23:26.665 "name": "BaseBdev2", 00:23:26.665 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:26.665 "is_configured": true, 00:23:26.665 "data_offset": 2048, 00:23:26.665 "data_size": 63488 00:23:26.665 }, 00:23:26.665 { 00:23:26.665 "name": "BaseBdev3", 00:23:26.665 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:26.665 "is_configured": true, 
00:23:26.665 "data_offset": 2048, 00:23:26.665 "data_size": 63488 00:23:26.665 }, 00:23:26.665 { 00:23:26.665 "name": "BaseBdev4", 00:23:26.665 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:26.665 "is_configured": true, 00:23:26.665 "data_offset": 2048, 00:23:26.665 "data_size": 63488 00:23:26.665 } 00:23:26.665 ] 00:23:26.665 }' 00:23:26.665 22:31:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.665 22:31:25 -- common/autotest_common.sh@10 -- # set +x 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.231 22:31:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:27.798 "name": "raid_bdev1", 00:23:27.798 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:27.798 "strip_size_kb": 0, 00:23:27.798 "state": "online", 00:23:27.798 "raid_level": "raid1", 00:23:27.798 "superblock": true, 00:23:27.798 "num_base_bdevs": 4, 00:23:27.798 "num_base_bdevs_discovered": 3, 00:23:27.798 "num_base_bdevs_operational": 3, 00:23:27.798 "base_bdevs_list": [ 00:23:27.798 { 00:23:27.798 "name": null, 00:23:27.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.798 "is_configured": false, 00:23:27.798 "data_offset": 2048, 00:23:27.798 "data_size": 63488 00:23:27.798 }, 00:23:27.798 { 00:23:27.798 "name": "BaseBdev2", 00:23:27.798 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:27.798 "is_configured": true, 00:23:27.798 "data_offset": 2048, 00:23:27.798 "data_size": 63488 00:23:27.798 }, 00:23:27.798 { 00:23:27.798 "name": "BaseBdev3", 00:23:27.798 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:27.798 "is_configured": true, 00:23:27.798 "data_offset": 2048, 00:23:27.798 "data_size": 63488 00:23:27.798 }, 00:23:27.798 { 00:23:27.798 "name": "BaseBdev4", 00:23:27.798 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:27.798 "is_configured": true, 00:23:27.798 "data_offset": 2048, 00:23:27.798 "data_size": 63488 00:23:27.798 } 00:23:27.798 ] 00:23:27.798 }' 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:27.798 22:31:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.798 [2024-07-14 22:31:26.851835] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:27.798 [2024-07-14 22:31:26.851892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:28.056 22:31:26 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:28.056 [2024-07-14 22:31:26.912777] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:28.056 [2024-07-14 22:31:26.914582] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:23:28.056 [2024-07-14 22:31:27.034482] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:28.314 [2024-07-14 22:31:27.157217] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:28.314 [2024-07-14 22:31:27.157940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:28.571 [2024-07-14 22:31:27.497968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:28.571 [2024-07-14 22:31:27.634774] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:28.571 [2024-07-14 22:31:27.635085] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.850 22:31:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.108 [2024-07-14 22:31:28.003278] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:29.108 22:31:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:29.108 "name": "raid_bdev1", 00:23:29.108 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:29.108 "strip_size_kb": 0, 00:23:29.108 "state": "online", 00:23:29.108 "raid_level": "raid1", 00:23:29.108 "superblock": true, 00:23:29.108 "num_base_bdevs": 4, 00:23:29.108 "num_base_bdevs_discovered": 4, 00:23:29.108 "num_base_bdevs_operational": 4, 00:23:29.108 "process": { 00:23:29.108 "type": "rebuild", 00:23:29.108 "target": "spare", 00:23:29.108 "progress": { 00:23:29.108 "blocks": 16384, 00:23:29.108 "percent": 25 00:23:29.108 } 00:23:29.108 }, 00:23:29.108 "base_bdevs_list": [ 00:23:29.108 { 00:23:29.108 "name": "spare", 00:23:29.108 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:29.108 "is_configured": true, 00:23:29.108 "data_offset": 2048, 00:23:29.108 "data_size": 63488 00:23:29.108 }, 00:23:29.108 { 00:23:29.108 "name": "BaseBdev2", 00:23:29.108 "uuid": "79d663c9-48a1-5daa-9a33-95a8636f45cc", 00:23:29.108 "is_configured": true, 00:23:29.108 "data_offset": 2048, 00:23:29.108 "data_size": 63488 00:23:29.108 }, 00:23:29.108 { 00:23:29.108 "name": "BaseBdev3", 00:23:29.108 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:29.108 "is_configured": true, 00:23:29.108 "data_offset": 2048, 00:23:29.108 "data_size": 63488 00:23:29.108 }, 00:23:29.108 { 00:23:29.108 "name": "BaseBdev4", 00:23:29.108 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:29.108 "is_configured": true, 00:23:29.108 "data_offset": 2048, 00:23:29.108 "data_size": 63488 00:23:29.108 } 00:23:29.108 ] 00:23:29.108 }' 00:23:29.109 22:31:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:29.366 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:29.366 22:31:28 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:29.366 [2024-07-14 22:31:28.361719] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:29.624 [2024-07-14 22:31:28.505684] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:29.624 [2024-07-14 22:31:28.577602] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:29.624 [2024-07-14 22:31:28.577781] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:29.624 [2024-07-14 22:31:28.584310] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:23:29.624 [2024-07-14 22:31:28.584341] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:23:29.624 [2024-07-14 22:31:28.586790] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.882 22:31:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.882 [2024-07-14 22:31:28.870497] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:30.141 22:31:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.141 "name": "raid_bdev1", 00:23:30.141 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:30.141 "strip_size_kb": 0, 00:23:30.141 "state": "online", 00:23:30.141 "raid_level": "raid1", 00:23:30.141 "superblock": true, 00:23:30.141 "num_base_bdevs": 4, 00:23:30.141 "num_base_bdevs_discovered": 3, 00:23:30.141 "num_base_bdevs_operational": 3, 00:23:30.141 "process": { 00:23:30.141 "type": "rebuild", 00:23:30.141 "target": "spare", 00:23:30.141 "progress": { 00:23:30.141 "blocks": 26624, 00:23:30.141 "percent": 41 00:23:30.141 } 00:23:30.141 }, 00:23:30.141 "base_bdevs_list": [ 00:23:30.141 { 00:23:30.141 "name": "spare", 00:23:30.141 "uuid": 
"6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:30.141 "is_configured": true, 00:23:30.141 "data_offset": 2048, 00:23:30.141 "data_size": 63488 00:23:30.141 }, 00:23:30.141 { 00:23:30.141 "name": null, 00:23:30.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.141 "is_configured": false, 00:23:30.141 "data_offset": 2048, 00:23:30.141 "data_size": 63488 00:23:30.141 }, 00:23:30.141 { 00:23:30.141 "name": "BaseBdev3", 00:23:30.141 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:30.141 "is_configured": true, 00:23:30.141 "data_offset": 2048, 00:23:30.141 "data_size": 63488 00:23:30.141 }, 00:23:30.141 { 00:23:30.141 "name": "BaseBdev4", 00:23:30.141 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:30.141 "is_configured": true, 00:23:30.141 "data_offset": 2048, 00:23:30.141 "data_size": 63488 00:23:30.141 } 00:23:30.141 ] 00:23:30.141 }' 00:23:30.141 22:31:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@657 -- # local timeout=546 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.141 22:31:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.141 [2024-07-14 22:31:29.107745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.399 "name": "raid_bdev1", 00:23:30.399 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:30.399 "strip_size_kb": 0, 00:23:30.399 "state": "online", 00:23:30.399 "raid_level": "raid1", 00:23:30.399 "superblock": true, 00:23:30.399 "num_base_bdevs": 4, 00:23:30.399 "num_base_bdevs_discovered": 3, 00:23:30.399 "num_base_bdevs_operational": 3, 00:23:30.399 "process": { 00:23:30.399 "type": "rebuild", 00:23:30.399 "target": "spare", 00:23:30.399 "progress": { 00:23:30.399 "blocks": 30720, 00:23:30.399 "percent": 48 00:23:30.399 } 00:23:30.399 }, 00:23:30.399 "base_bdevs_list": [ 00:23:30.399 { 00:23:30.399 "name": "spare", 00:23:30.399 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:30.399 "is_configured": true, 00:23:30.399 "data_offset": 2048, 00:23:30.399 "data_size": 63488 00:23:30.399 }, 00:23:30.399 { 00:23:30.399 "name": null, 00:23:30.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.399 "is_configured": false, 00:23:30.399 "data_offset": 2048, 00:23:30.399 "data_size": 63488 00:23:30.399 }, 00:23:30.399 { 00:23:30.399 "name": "BaseBdev3", 00:23:30.399 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:30.399 "is_configured": true, 00:23:30.399 "data_offset": 2048, 00:23:30.399 "data_size": 63488 00:23:30.399 }, 00:23:30.399 { 00:23:30.399 "name": "BaseBdev4", 
00:23:30.399 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:30.399 "is_configured": true, 00:23:30.399 "data_offset": 2048, 00:23:30.399 "data_size": 63488 00:23:30.399 } 00:23:30.399 ] 00:23:30.399 }' 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:30.399 [2024-07-14 22:31:29.417309] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.399 22:31:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:30.966 [2024-07-14 22:31:29.728740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:30.966 [2024-07-14 22:31:29.834440] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:31.223 [2024-07-14 22:31:30.150244] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:31.223 [2024-07-14 22:31:30.251293] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.481 22:31:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.481 [2024-07-14 22:31:30.472901] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:31.739 22:31:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.739 "name": "raid_bdev1", 00:23:31.739 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:31.739 "strip_size_kb": 0, 00:23:31.739 "state": "online", 00:23:31.739 "raid_level": "raid1", 00:23:31.739 "superblock": true, 00:23:31.739 "num_base_bdevs": 4, 00:23:31.739 "num_base_bdevs_discovered": 3, 00:23:31.739 "num_base_bdevs_operational": 3, 00:23:31.739 "process": { 00:23:31.739 "type": "rebuild", 00:23:31.739 "target": "spare", 00:23:31.739 "progress": { 00:23:31.739 "blocks": 51200, 00:23:31.739 "percent": 80 00:23:31.739 } 00:23:31.739 }, 00:23:31.739 "base_bdevs_list": [ 00:23:31.739 { 00:23:31.739 "name": "spare", 00:23:31.739 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:31.739 "is_configured": true, 00:23:31.739 "data_offset": 2048, 00:23:31.739 "data_size": 63488 00:23:31.739 }, 00:23:31.739 { 00:23:31.739 "name": null, 00:23:31.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.739 "is_configured": false, 00:23:31.739 "data_offset": 2048, 00:23:31.739 "data_size": 63488 00:23:31.739 }, 00:23:31.739 { 00:23:31.739 "name": "BaseBdev3", 00:23:31.739 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:31.739 
"is_configured": true, 00:23:31.739 "data_offset": 2048, 00:23:31.739 "data_size": 63488 00:23:31.739 }, 00:23:31.739 { 00:23:31.739 "name": "BaseBdev4", 00:23:31.739 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:31.739 "is_configured": true, 00:23:31.739 "data_offset": 2048, 00:23:31.739 "data_size": 63488 00:23:31.739 } 00:23:31.739 ] 00:23:31.739 }' 00:23:31.739 22:31:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.739 [2024-07-14 22:31:30.682084] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:31.740 22:31:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.740 22:31:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.740 22:31:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.740 22:31:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:31.997 [2024-07-14 22:31:30.897256] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:32.255 [2024-07-14 22:31:31.227621] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:32.513 [2024-07-14 22:31:31.327611] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:32.513 [2024-07-14 22:31:31.330058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.771 22:31:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.034 22:31:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.034 "name": "raid_bdev1", 00:23:33.034 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:33.034 "strip_size_kb": 0, 00:23:33.034 "state": "online", 00:23:33.034 "raid_level": "raid1", 00:23:33.034 "superblock": true, 00:23:33.034 "num_base_bdevs": 4, 00:23:33.034 "num_base_bdevs_discovered": 3, 00:23:33.034 "num_base_bdevs_operational": 3, 00:23:33.034 "base_bdevs_list": [ 00:23:33.034 { 00:23:33.034 "name": "spare", 00:23:33.034 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:33.034 "is_configured": true, 00:23:33.034 "data_offset": 2048, 00:23:33.034 "data_size": 63488 00:23:33.034 }, 00:23:33.034 { 00:23:33.034 "name": null, 00:23:33.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.034 "is_configured": false, 00:23:33.034 "data_offset": 2048, 00:23:33.034 "data_size": 63488 00:23:33.034 }, 00:23:33.034 { 00:23:33.034 "name": "BaseBdev3", 00:23:33.034 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:33.034 "is_configured": true, 00:23:33.034 "data_offset": 2048, 00:23:33.034 "data_size": 63488 00:23:33.034 }, 00:23:33.034 { 00:23:33.034 "name": "BaseBdev4", 00:23:33.034 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:33.034 "is_configured": true, 00:23:33.034 "data_offset": 2048, 00:23:33.034 "data_size": 63488 00:23:33.034 } 
00:23:33.034 ] 00:23:33.034 }' 00:23:33.034 22:31:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.034 22:31:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:33.034 22:31:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.302 22:31:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@660 -- # break 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.303 22:31:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.560 22:31:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.560 "name": "raid_bdev1", 00:23:33.560 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:33.560 "strip_size_kb": 0, 00:23:33.560 "state": "online", 00:23:33.560 "raid_level": "raid1", 00:23:33.560 "superblock": true, 00:23:33.560 "num_base_bdevs": 4, 00:23:33.560 "num_base_bdevs_discovered": 3, 00:23:33.560 "num_base_bdevs_operational": 3, 00:23:33.560 "base_bdevs_list": [ 00:23:33.560 { 00:23:33.560 "name": "spare", 00:23:33.560 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:33.560 "is_configured": true, 00:23:33.560 "data_offset": 2048, 00:23:33.560 "data_size": 63488 00:23:33.560 }, 00:23:33.560 { 00:23:33.560 "name": null, 00:23:33.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.561 "is_configured": false, 00:23:33.561 "data_offset": 2048, 00:23:33.561 "data_size": 63488 00:23:33.561 }, 00:23:33.561 { 00:23:33.561 "name": "BaseBdev3", 00:23:33.561 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:33.561 "is_configured": true, 00:23:33.561 "data_offset": 2048, 00:23:33.561 "data_size": 63488 00:23:33.561 }, 00:23:33.561 { 00:23:33.561 "name": "BaseBdev4", 00:23:33.561 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:33.561 "is_configured": true, 00:23:33.561 "data_offset": 2048, 00:23:33.561 "data_size": 63488 00:23:33.561 } 00:23:33.561 ] 00:23:33.561 }' 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.561 22:31:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.819 22:31:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.819 "name": "raid_bdev1", 00:23:33.819 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:33.819 "strip_size_kb": 0, 00:23:33.819 "state": "online", 00:23:33.819 "raid_level": "raid1", 00:23:33.819 "superblock": true, 00:23:33.819 "num_base_bdevs": 4, 00:23:33.819 "num_base_bdevs_discovered": 3, 00:23:33.819 "num_base_bdevs_operational": 3, 00:23:33.819 "base_bdevs_list": [ 00:23:33.819 { 00:23:33.819 "name": "spare", 00:23:33.819 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:33.819 "is_configured": true, 00:23:33.819 "data_offset": 2048, 00:23:33.819 "data_size": 63488 00:23:33.819 }, 00:23:33.819 { 00:23:33.819 "name": null, 00:23:33.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.819 "is_configured": false, 00:23:33.819 "data_offset": 2048, 00:23:33.819 "data_size": 63488 00:23:33.819 }, 00:23:33.819 { 00:23:33.819 "name": "BaseBdev3", 00:23:33.819 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:33.819 "is_configured": true, 00:23:33.819 "data_offset": 2048, 00:23:33.819 "data_size": 63488 00:23:33.819 }, 00:23:33.819 { 00:23:33.819 "name": "BaseBdev4", 00:23:33.819 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:33.819 "is_configured": true, 00:23:33.819 "data_offset": 2048, 00:23:33.819 "data_size": 63488 00:23:33.819 } 00:23:33.819 ] 00:23:33.819 }' 00:23:33.819 22:31:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.819 22:31:32 -- common/autotest_common.sh@10 -- # set +x 00:23:34.408 22:31:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:34.676 [2024-07-14 22:31:33.648835] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:34.676 [2024-07-14 22:31:33.648878] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:34.676
00:23:34.676 Latency(us)
00:23:34.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.676 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:23:34.676 raid_bdev1 : 11.47 114.63 343.89 0.00 0.00 12190.65 309.06 109623.85
00:23:34.676 ===================================================================================================================
00:23:34.676 Total : 114.63 343.89 0.00 0.00 12190.65 309.06 109623.85
00:23:34.676 [2024-07-14 22:31:33.727738] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.676 [2024-07-14 22:31:33.727795] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.676 [2024-07-14 22:31:33.727915] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:34.676 [2024-07-14 22:31:33.727930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:34.676 0 00:23:34.932 22:31:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.932 22:31:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:34.932 22:31:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:34.932 22:31:33 --
bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:34.932 22:31:33 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@12 -- # local i 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:34.932 22:31:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:35.190 /dev/nbd0 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.448 22:31:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:35.448 22:31:34 -- common/autotest_common.sh@857 -- # local i 00:23:35.448 22:31:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:35.448 22:31:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:35.448 22:31:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:35.448 22:31:34 -- common/autotest_common.sh@861 -- # break 00:23:35.448 22:31:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:35.448 22:31:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:35.448 22:31:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.448 1+0 records in 00:23:35.448 1+0 records out 00:23:35.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030245 s, 13.5 MB/s 00:23:35.448 22:31:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.448 22:31:34 -- common/autotest_common.sh@874 -- # size=4096 00:23:35.448 22:31:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.448 22:31:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:35.448 22:31:34 -- common/autotest_common.sh@877 -- # return 0 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@678 -- # continue 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:35.448 22:31:34 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@12 -- # local i 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.448 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.448 22:31:34 -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:35.448 /dev/nbd1 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:35.706 22:31:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:35.706 22:31:34 -- common/autotest_common.sh@857 -- # local i 00:23:35.706 22:31:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:35.706 22:31:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:35.706 22:31:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:35.706 22:31:34 -- common/autotest_common.sh@861 -- # break 00:23:35.706 22:31:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:35.706 22:31:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:35.706 22:31:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.706 1+0 records in 00:23:35.706 1+0 records out 00:23:35.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237817 s, 17.2 MB/s 00:23:35.706 22:31:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.706 22:31:34 -- common/autotest_common.sh@874 -- # size=4096 00:23:35.706 22:31:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.706 22:31:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:35.706 22:31:34 -- common/autotest_common.sh@877 -- # return 0 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.706 22:31:34 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:35.706 22:31:34 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@51 -- # local i 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.706 22:31:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:35.964 22:31:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@41 -- # break 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.222 22:31:35 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:36.222 22:31:35 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:36.222 22:31:35 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 
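
Note on the "[: =: unary operator expected" message recorded at 22:31:28 above (bdev_raid.sh line 617): that is a genuine quoting bug in the test script, not a raid failure. An empty, unquoted variable vanishes during word splitting, so the '[' builtin sees only '= false' and complains about the missing left operand; '[' then returns nonzero and the trace simply continues at line 642. A minimal sketch of the failure and the usual fixes, with $var standing in for the script's empty variable (its real name is not visible in this trace):

    var=
    [ $var = false ]     # expands to: [ = false ]  -> "[: =: unary operator expected"
    [ "$var" = false ]   # expands to: [ "" = false ] -> evaluates cleanly to false
    [[ $var = false ]]   # [[ ]] does no word splitting, so this form is also safe
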
00:23:36.222 22:31:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@12 -- # local i 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:36.222 /dev/nbd1 00:23:36.222 22:31:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:36.481 22:31:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:36.481 22:31:35 -- common/autotest_common.sh@857 -- # local i 00:23:36.481 22:31:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:36.481 22:31:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:36.481 22:31:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:36.481 22:31:35 -- common/autotest_common.sh@861 -- # break 00:23:36.481 22:31:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:36.481 22:31:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:36.481 22:31:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.481 1+0 records in 00:23:36.481 1+0 records out 00:23:36.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221525 s, 18.5 MB/s 00:23:36.481 22:31:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.481 22:31:35 -- common/autotest_common.sh@874 -- # size=4096 00:23:36.481 22:31:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.481 22:31:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:36.481 22:31:35 -- common/autotest_common.sh@877 -- # return 0 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.481 22:31:35 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:36.481 22:31:35 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@51 -- # local i 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.481 22:31:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:36.739 22:31:35 -- 
bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@41 -- # break 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.739 22:31:35 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@51 -- # local i 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.739 22:31:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:36.998 22:31:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.998 22:31:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.998 22:31:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.998 22:31:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.998 22:31:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.999 22:31:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.999 22:31:36 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:37.257 22:31:36 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:37.257 22:31:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.257 22:31:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.257 22:31:36 -- bdev/nbd_common.sh@41 -- # break 00:23:37.257 22:31:36 -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.257 22:31:36 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:37.257 22:31:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:37.257 22:31:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:37.257 22:31:36 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:37.516 [2024-07-14 22:31:36.547086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:37.516 [2024-07-14 22:31:36.547161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.516 [2024-07-14 22:31:36.547203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:37.516 [2024-07-14 22:31:36.547226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.516 [2024-07-14 22:31:36.549163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.516 [2024-07-14 22:31:36.549225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:37.516 [2024-07-14 22:31:36.549321] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:37.516 [2024-07-14 22:31:36.549381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:37.516 BaseBdev1 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@696 -- # continue 00:23:37.516 
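
For context, the comparison pass that just completed works by exporting bdevs as Linux block devices over NBD and byte-comparing them: the rebuilt spare stays on /dev/nbd0 while each surviving base bdev is attached in turn as /dev/nbd1, and since this is raid1 every member must hold identical data past its superblock. The cmp offset is not arbitrary: data_offset is 2048 blocks of 512 bytes, and 2048 * 512 = 1048576, so cmp -i skips exactly the metadata region that legitimately differs. A condensed sketch of the idiom, assuming the same RPC socket as above (rpc.py abbreviates the full scripts/rpc.py path):

    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
    for bdev in BaseBdev3 BaseBdev4; do
        rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 1048576 /dev/nbd0 /dev/nbd1    # skip the 2048-block superblock on both
        rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
    done
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
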
22:31:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:37.516 22:31:36 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:37.775 22:31:36 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:38.034 [2024-07-14 22:31:36.931199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:38.034 [2024-07-14 22:31:36.931250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.034 [2024-07-14 22:31:36.931283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:38.034 [2024-07-14 22:31:36.931303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.034 [2024-07-14 22:31:36.931641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.034 [2024-07-14 22:31:36.931692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:38.034 [2024-07-14 22:31:36.931791] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:38.034 [2024-07-14 22:31:36.931807] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:38.034 [2024-07-14 22:31:36.931814] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.034 [2024-07-14 22:31:36.931831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:23:38.034 [2024-07-14 22:31:36.931900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:38.034 BaseBdev3 00:23:38.034 22:31:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:38.034 22:31:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:38.034 22:31:36 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:38.293 22:31:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:38.293 [2024-07-14 22:31:37.347316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:38.293 [2024-07-14 22:31:37.347397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.293 [2024-07-14 22:31:37.347430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:38.293 [2024-07-14 22:31:37.347460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.293 [2024-07-14 22:31:37.347833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.293 [2024-07-14 22:31:37.347883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:38.293 [2024-07-14 22:31:37.347962] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:38.293 [2024-07-14 22:31:37.347987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:38.293 BaseBdev4 00:23:38.293 22:31:37 -- bdev/bdev_raid.sh@701 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:38.552 22:31:37 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:38.811 [2024-07-14 22:31:37.727438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:38.811 [2024-07-14 22:31:37.727511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.811 [2024-07-14 22:31:37.727540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:23:38.811 [2024-07-14 22:31:37.727565] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.811 [2024-07-14 22:31:37.727970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.811 [2024-07-14 22:31:37.728024] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:38.811 [2024-07-14 22:31:37.728111] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:38.811 [2024-07-14 22:31:37.728145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.811 spare 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.811 22:31:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.811 [2024-07-14 22:31:37.828245] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:23:38.811 [2024-07-14 22:31:37.828271] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:38.811 [2024-07-14 22:31:37.828382] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a220 00:23:38.811 [2024-07-14 22:31:37.828755] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:23:38.811 [2024-07-14 22:31:37.828783] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:23:38.811 [2024-07-14 22:31:37.828936] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.070 22:31:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.070 "name": "raid_bdev1", 00:23:39.070 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:39.070 "strip_size_kb": 0, 00:23:39.070 "state": "online", 00:23:39.070 "raid_level": "raid1", 00:23:39.070 "superblock": true, 00:23:39.070 "num_base_bdevs": 4, 00:23:39.070 "num_base_bdevs_discovered": 3, 00:23:39.070 "num_base_bdevs_operational": 3, 00:23:39.070 "base_bdevs_list": [ 00:23:39.070 { 
00:23:39.070 "name": "spare", 00:23:39.070 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:39.070 "is_configured": true, 00:23:39.070 "data_offset": 2048, 00:23:39.070 "data_size": 63488 00:23:39.070 }, 00:23:39.070 { 00:23:39.071 "name": null, 00:23:39.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.071 "is_configured": false, 00:23:39.071 "data_offset": 2048, 00:23:39.071 "data_size": 63488 00:23:39.071 }, 00:23:39.071 { 00:23:39.071 "name": "BaseBdev3", 00:23:39.071 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:39.071 "is_configured": true, 00:23:39.071 "data_offset": 2048, 00:23:39.071 "data_size": 63488 00:23:39.071 }, 00:23:39.071 { 00:23:39.071 "name": "BaseBdev4", 00:23:39.071 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:39.071 "is_configured": true, 00:23:39.071 "data_offset": 2048, 00:23:39.071 "data_size": 63488 00:23:39.071 } 00:23:39.071 ] 00:23:39.071 }' 00:23:39.071 22:31:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.071 22:31:37 -- common/autotest_common.sh@10 -- # set +x 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.638 22:31:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.897 22:31:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.897 "name": "raid_bdev1", 00:23:39.897 "uuid": "c0e3ded7-eb81-4f12-9a40-2ca98b86001a", 00:23:39.897 "strip_size_kb": 0, 00:23:39.897 "state": "online", 00:23:39.897 "raid_level": "raid1", 00:23:39.897 "superblock": true, 00:23:39.897 "num_base_bdevs": 4, 00:23:39.897 "num_base_bdevs_discovered": 3, 00:23:39.897 "num_base_bdevs_operational": 3, 00:23:39.897 "base_bdevs_list": [ 00:23:39.897 { 00:23:39.897 "name": "spare", 00:23:39.897 "uuid": "6ef6e089-af2c-542f-b13f-9b233b1cd81c", 00:23:39.897 "is_configured": true, 00:23:39.897 "data_offset": 2048, 00:23:39.897 "data_size": 63488 00:23:39.897 }, 00:23:39.897 { 00:23:39.897 "name": null, 00:23:39.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.897 "is_configured": false, 00:23:39.897 "data_offset": 2048, 00:23:39.897 "data_size": 63488 00:23:39.897 }, 00:23:39.897 { 00:23:39.897 "name": "BaseBdev3", 00:23:39.897 "uuid": "8e6dfd38-1bf7-5238-9f35-20fee19232d4", 00:23:39.897 "is_configured": true, 00:23:39.897 "data_offset": 2048, 00:23:39.897 "data_size": 63488 00:23:39.897 }, 00:23:39.897 { 00:23:39.897 "name": "BaseBdev4", 00:23:39.897 "uuid": "3589135d-d9a0-5264-a991-aabb0b4896dd", 00:23:39.897 "is_configured": true, 00:23:39.897 "data_offset": 2048, 00:23:39.897 "data_size": 63488 00:23:39.897 } 00:23:39.897 ] 00:23:39.897 }' 00:23:39.897 22:31:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.897 22:31:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:39.897 22:31:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.897 22:31:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:40.156 22:31:38 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:40.156 22:31:38 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:40.156 22:31:39 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.156 22:31:39 -- bdev/bdev_raid.sh@709 -- # killprocess 129820 00:23:40.156 22:31:39 -- common/autotest_common.sh@926 -- # '[' -z 129820 ']' 00:23:40.156 22:31:39 -- common/autotest_common.sh@930 -- # kill -0 129820 00:23:40.156 22:31:39 -- common/autotest_common.sh@931 -- # uname 00:23:40.156 22:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:40.156 22:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129820 00:23:40.156 22:31:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:40.156 22:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:40.156 22:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129820' 00:23:40.156 killing process with pid 129820 00:23:40.156 22:31:39 -- common/autotest_common.sh@945 -- # kill 129820 00:23:40.156 Received shutdown signal, test time was about 16.952194 seconds 00:23:40.156 00:23:40.156 Latency(us) 00:23:40.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.157 =================================================================================================================== 00:23:40.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.157 [2024-07-14 22:31:39.193295] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.157 [2024-07-14 22:31:39.193358] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.157 22:31:39 -- common/autotest_common.sh@950 -- # wait 129820 00:23:40.157 [2024-07-14 22:31:39.193426] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.157 [2024-07-14 22:31:39.193440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:23:40.416 [2024-07-14 22:31:39.467009] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:41.353 22:31:40 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:41.353 00:23:41.353 real 0m23.306s 00:23:41.353 user 0m37.661s 00:23:41.353 sys 0m2.728s 00:23:41.353 22:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.353 22:31:40 -- common/autotest_common.sh@10 -- # set +x 00:23:41.353 ************************************ 00:23:41.353 END TEST raid_rebuild_test_sb_io 00:23:41.353 ************************************ 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:41.611 22:31:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:41.611 22:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.611 22:31:40 -- common/autotest_common.sh@10 -- # set +x 00:23:41.611 ************************************ 00:23:41.611 START TEST raid5f_state_function_test 00:23:41.611 ************************************ 00:23:41.611 22:31:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@205 -- 
# local raid_bdev 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=130473 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130473' 00:23:41.611 Process raid pid: 130473 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130473 /var/tmp/spdk-raid.sock 00:23:41.611 22:31:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:41.611 22:31:40 -- common/autotest_common.sh@819 -- # '[' -z 130473 ']' 00:23:41.611 22:31:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:41.611 22:31:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:41.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:41.611 22:31:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:41.611 22:31:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:41.611 22:31:40 -- common/autotest_common.sh@10 -- # set +x 00:23:41.611 [2024-07-14 22:31:40.549841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
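
The setup just traced differs from the raid1 runs above in one detail: for any level other than raid1 the helper sets strip_size=64 and passes '-z 64', and with superblock=false no superblock argument is added. The create call this produces appears a little later in the trace; a sketch for orientation:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # the base bdevs need not exist yet: the raid bdev is created in "configuring"
    # state and claims each BaseBdev as it is registered, which is what this test checks
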
00:23:41.611 [2024-07-14 22:31:40.550028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.869 [2024-07-14 22:31:40.715626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.869 [2024-07-14 22:31:40.871411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.127 [2024-07-14 22:31:41.036615] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:42.695 22:31:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:42.695 22:31:41 -- common/autotest_common.sh@852 -- # return 0 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:42.695 [2024-07-14 22:31:41.662614] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:42.695 [2024-07-14 22:31:41.662682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:42.695 [2024-07-14 22:31:41.662695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:42.695 [2024-07-14 22:31:41.662716] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:42.695 [2024-07-14 22:31:41.662724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:42.695 [2024-07-14 22:31:41.662763] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.695 22:31:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.953 22:31:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.953 "name": "Existed_Raid", 00:23:42.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.953 "strip_size_kb": 64, 00:23:42.953 "state": "configuring", 00:23:42.953 "raid_level": "raid5f", 00:23:42.953 "superblock": false, 00:23:42.953 "num_base_bdevs": 3, 00:23:42.953 "num_base_bdevs_discovered": 0, 00:23:42.953 "num_base_bdevs_operational": 3, 00:23:42.953 "base_bdevs_list": [ 00:23:42.953 { 00:23:42.953 "name": "BaseBdev1", 00:23:42.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.954 "is_configured": false, 00:23:42.954 "data_offset": 0, 00:23:42.954 "data_size": 0 00:23:42.954 }, 00:23:42.954 { 00:23:42.954 "name": "BaseBdev2", 00:23:42.954 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:42.954 "is_configured": false, 00:23:42.954 "data_offset": 0, 00:23:42.954 "data_size": 0 00:23:42.954 }, 00:23:42.954 { 00:23:42.954 "name": "BaseBdev3", 00:23:42.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.954 "is_configured": false, 00:23:42.954 "data_offset": 0, 00:23:42.954 "data_size": 0 00:23:42.954 } 00:23:42.954 ] 00:23:42.954 }' 00:23:42.954 22:31:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.954 22:31:41 -- common/autotest_common.sh@10 -- # set +x 00:23:43.522 22:31:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:43.780 [2024-07-14 22:31:42.794696] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:43.780 [2024-07-14 22:31:42.794728] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:43.780 22:31:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:44.039 [2024-07-14 22:31:43.002746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:44.039 [2024-07-14 22:31:43.002810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:44.039 [2024-07-14 22:31:43.002823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:44.039 [2024-07-14 22:31:43.002846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:44.039 [2024-07-14 22:31:43.002854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:44.039 [2024-07-14 22:31:43.002885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:44.039 22:31:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:44.298 [2024-07-14 22:31:43.224206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:44.298 BaseBdev1 00:23:44.298 22:31:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:44.298 22:31:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:44.298 22:31:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:44.298 22:31:43 -- common/autotest_common.sh@889 -- # local i 00:23:44.298 22:31:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:44.298 22:31:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:44.298 22:31:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.556 22:31:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:44.814 [ 00:23:44.814 { 00:23:44.814 "name": "BaseBdev1", 00:23:44.814 "aliases": [ 00:23:44.814 "3dc6412e-a541-4685-ad42-6a07588a10cd" 00:23:44.814 ], 00:23:44.814 "product_name": "Malloc disk", 00:23:44.814 "block_size": 512, 00:23:44.814 "num_blocks": 65536, 00:23:44.814 "uuid": "3dc6412e-a541-4685-ad42-6a07588a10cd", 00:23:44.814 "assigned_rate_limits": { 00:23:44.814 "rw_ios_per_sec": 0, 00:23:44.814 "rw_mbytes_per_sec": 0, 00:23:44.814 "r_mbytes_per_sec": 0, 00:23:44.814 "w_mbytes_per_sec": 
0 00:23:44.814 }, 00:23:44.814 "claimed": true, 00:23:44.814 "claim_type": "exclusive_write", 00:23:44.814 "zoned": false, 00:23:44.814 "supported_io_types": { 00:23:44.814 "read": true, 00:23:44.814 "write": true, 00:23:44.814 "unmap": true, 00:23:44.814 "write_zeroes": true, 00:23:44.814 "flush": true, 00:23:44.814 "reset": true, 00:23:44.814 "compare": false, 00:23:44.814 "compare_and_write": false, 00:23:44.814 "abort": true, 00:23:44.814 "nvme_admin": false, 00:23:44.814 "nvme_io": false 00:23:44.814 }, 00:23:44.814 "memory_domains": [ 00:23:44.814 { 00:23:44.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.814 "dma_device_type": 2 00:23:44.814 } 00:23:44.814 ], 00:23:44.814 "driver_specific": {} 00:23:44.814 } 00:23:44.814 ] 00:23:44.814 22:31:43 -- common/autotest_common.sh@895 -- # return 0 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.814 22:31:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.072 22:31:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.072 "name": "Existed_Raid", 00:23:45.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.072 "strip_size_kb": 64, 00:23:45.072 "state": "configuring", 00:23:45.072 "raid_level": "raid5f", 00:23:45.072 "superblock": false, 00:23:45.072 "num_base_bdevs": 3, 00:23:45.072 "num_base_bdevs_discovered": 1, 00:23:45.072 "num_base_bdevs_operational": 3, 00:23:45.072 "base_bdevs_list": [ 00:23:45.072 { 00:23:45.072 "name": "BaseBdev1", 00:23:45.072 "uuid": "3dc6412e-a541-4685-ad42-6a07588a10cd", 00:23:45.072 "is_configured": true, 00:23:45.072 "data_offset": 0, 00:23:45.072 "data_size": 65536 00:23:45.072 }, 00:23:45.072 { 00:23:45.072 "name": "BaseBdev2", 00:23:45.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.072 "is_configured": false, 00:23:45.072 "data_offset": 0, 00:23:45.072 "data_size": 0 00:23:45.072 }, 00:23:45.072 { 00:23:45.072 "name": "BaseBdev3", 00:23:45.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.072 "is_configured": false, 00:23:45.072 "data_offset": 0, 00:23:45.072 "data_size": 0 00:23:45.072 } 00:23:45.072 ] 00:23:45.072 }' 00:23:45.072 22:31:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.072 22:31:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.662 22:31:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:45.922 [2024-07-14 22:31:44.908535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:45.922 [2024-07-14 22:31:44.908577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:23:45.922 22:31:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:45.922 22:31:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:46.180 [2024-07-14 22:31:45.108608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:46.180 [2024-07-14 22:31:45.110356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:46.180 [2024-07-14 22:31:45.110413] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:46.180 [2024-07-14 22:31:45.110427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:46.180 [2024-07-14 22:31:45.110453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:46.180 22:31:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:46.180 22:31:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.181 22:31:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.439 22:31:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:46.439 "name": "Existed_Raid", 00:23:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.439 "strip_size_kb": 64, 00:23:46.439 "state": "configuring", 00:23:46.439 "raid_level": "raid5f", 00:23:46.439 "superblock": false, 00:23:46.439 "num_base_bdevs": 3, 00:23:46.439 "num_base_bdevs_discovered": 1, 00:23:46.439 "num_base_bdevs_operational": 3, 00:23:46.439 "base_bdevs_list": [ 00:23:46.439 { 00:23:46.439 "name": "BaseBdev1", 00:23:46.439 "uuid": "3dc6412e-a541-4685-ad42-6a07588a10cd", 00:23:46.439 "is_configured": true, 00:23:46.439 "data_offset": 0, 00:23:46.439 "data_size": 65536 00:23:46.439 }, 00:23:46.439 { 00:23:46.439 "name": "BaseBdev2", 00:23:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.439 "is_configured": false, 00:23:46.439 "data_offset": 0, 00:23:46.439 "data_size": 0 00:23:46.439 }, 00:23:46.439 { 00:23:46.439 "name": "BaseBdev3", 00:23:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.439 "is_configured": false, 00:23:46.439 "data_offset": 0, 00:23:46.439 "data_size": 0 00:23:46.439 } 00:23:46.439 ] 00:23:46.439 }' 00:23:46.439 22:31:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:46.439 22:31:45 -- common/autotest_common.sh@10 -- # set +x 00:23:47.005 22:31:45 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:47.263 [2024-07-14 22:31:46.236391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.263 BaseBdev2 00:23:47.263 22:31:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:47.263 22:31:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:47.263 22:31:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:47.263 22:31:46 -- common/autotest_common.sh@889 -- # local i 00:23:47.263 22:31:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:47.263 22:31:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:47.263 22:31:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:47.520 22:31:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:47.776 [ 00:23:47.776 { 00:23:47.776 "name": "BaseBdev2", 00:23:47.776 "aliases": [ 00:23:47.776 "73c43ef7-1eac-4d31-bc23-dac151b846d4" 00:23:47.776 ], 00:23:47.776 "product_name": "Malloc disk", 00:23:47.776 "block_size": 512, 00:23:47.776 "num_blocks": 65536, 00:23:47.776 "uuid": "73c43ef7-1eac-4d31-bc23-dac151b846d4", 00:23:47.776 "assigned_rate_limits": { 00:23:47.776 "rw_ios_per_sec": 0, 00:23:47.776 "rw_mbytes_per_sec": 0, 00:23:47.776 "r_mbytes_per_sec": 0, 00:23:47.776 "w_mbytes_per_sec": 0 00:23:47.776 }, 00:23:47.776 "claimed": true, 00:23:47.776 "claim_type": "exclusive_write", 00:23:47.776 "zoned": false, 00:23:47.776 "supported_io_types": { 00:23:47.776 "read": true, 00:23:47.776 "write": true, 00:23:47.776 "unmap": true, 00:23:47.776 "write_zeroes": true, 00:23:47.776 "flush": true, 00:23:47.776 "reset": true, 00:23:47.776 "compare": false, 00:23:47.776 "compare_and_write": false, 00:23:47.776 "abort": true, 00:23:47.776 "nvme_admin": false, 00:23:47.776 "nvme_io": false 00:23:47.777 }, 00:23:47.777 "memory_domains": [ 00:23:47.777 { 00:23:47.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.777 "dma_device_type": 2 00:23:47.777 } 00:23:47.777 ], 00:23:47.777 "driver_specific": {} 00:23:47.777 } 00:23:47.777 ] 00:23:47.777 22:31:46 -- common/autotest_common.sh@895 -- # return 0 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.777 22:31:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
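The trace above assembles the raid5f array incrementally: bdev_raid_create registers Existed_Raid in "configuring" state while its base bdevs are still missing, and each bdev_malloc_create then adds a base bdev that the raid module claims, bumping num_base_bdevs_discovered; as the rest of this trace shows, the state flips to "online" once the third base bdev is claimed. A minimal standalone sketch of that flow, using only RPC calls that appear in this log (socket path and malloc geometry copied from the trace; the jq filter mirrors what verify_raid_bdev_state does):

    # Sketch only, not part of the test run; values taken from the trace above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Register the array first; it sits in "configuring" until all bases exist.
    $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"   # 32 MiB at 512 B/block -> 65536 blocks
        $rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .state'  # "configuring" twice, then "online"
    done
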
00:23:48.035 22:31:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.035 "name": "Existed_Raid", 00:23:48.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.035 "strip_size_kb": 64, 00:23:48.035 "state": "configuring", 00:23:48.035 "raid_level": "raid5f", 00:23:48.035 "superblock": false, 00:23:48.035 "num_base_bdevs": 3, 00:23:48.035 "num_base_bdevs_discovered": 2, 00:23:48.035 "num_base_bdevs_operational": 3, 00:23:48.035 "base_bdevs_list": [ 00:23:48.035 { 00:23:48.035 "name": "BaseBdev1", 00:23:48.035 "uuid": "3dc6412e-a541-4685-ad42-6a07588a10cd", 00:23:48.035 "is_configured": true, 00:23:48.035 "data_offset": 0, 00:23:48.035 "data_size": 65536 00:23:48.035 }, 00:23:48.035 { 00:23:48.035 "name": "BaseBdev2", 00:23:48.035 "uuid": "73c43ef7-1eac-4d31-bc23-dac151b846d4", 00:23:48.035 "is_configured": true, 00:23:48.035 "data_offset": 0, 00:23:48.035 "data_size": 65536 00:23:48.035 }, 00:23:48.035 { 00:23:48.035 "name": "BaseBdev3", 00:23:48.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.035 "is_configured": false, 00:23:48.035 "data_offset": 0, 00:23:48.035 "data_size": 0 00:23:48.035 } 00:23:48.035 ] 00:23:48.035 }' 00:23:48.035 22:31:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.035 22:31:46 -- common/autotest_common.sh@10 -- # set +x 00:23:48.600 22:31:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:48.858 [2024-07-14 22:31:47.831373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:48.858 [2024-07-14 22:31:47.831462] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:48.858 [2024-07-14 22:31:47.831475] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:48.858 [2024-07-14 22:31:47.831585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:23:48.858 [2024-07-14 22:31:47.836168] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:48.858 [2024-07-14 22:31:47.836191] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:48.858 [2024-07-14 22:31:47.836513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.858 BaseBdev3 00:23:48.858 22:31:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:48.858 22:31:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:48.858 22:31:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:48.858 22:31:47 -- common/autotest_common.sh@889 -- # local i 00:23:48.858 22:31:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:48.858 22:31:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:48.858 22:31:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:49.116 22:31:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:49.374 [ 00:23:49.374 { 00:23:49.374 "name": "BaseBdev3", 00:23:49.374 "aliases": [ 00:23:49.374 "3bd254c1-880a-41f5-b9ea-30274d93b07f" 00:23:49.374 ], 00:23:49.374 "product_name": "Malloc disk", 00:23:49.374 "block_size": 512, 00:23:49.374 "num_blocks": 65536, 00:23:49.374 "uuid": "3bd254c1-880a-41f5-b9ea-30274d93b07f", 00:23:49.374 "assigned_rate_limits": { 00:23:49.374 
"rw_ios_per_sec": 0, 00:23:49.374 "rw_mbytes_per_sec": 0, 00:23:49.374 "r_mbytes_per_sec": 0, 00:23:49.374 "w_mbytes_per_sec": 0 00:23:49.374 }, 00:23:49.374 "claimed": true, 00:23:49.374 "claim_type": "exclusive_write", 00:23:49.374 "zoned": false, 00:23:49.374 "supported_io_types": { 00:23:49.374 "read": true, 00:23:49.374 "write": true, 00:23:49.374 "unmap": true, 00:23:49.374 "write_zeroes": true, 00:23:49.374 "flush": true, 00:23:49.374 "reset": true, 00:23:49.374 "compare": false, 00:23:49.374 "compare_and_write": false, 00:23:49.374 "abort": true, 00:23:49.374 "nvme_admin": false, 00:23:49.374 "nvme_io": false 00:23:49.375 }, 00:23:49.375 "memory_domains": [ 00:23:49.375 { 00:23:49.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.375 "dma_device_type": 2 00:23:49.375 } 00:23:49.375 ], 00:23:49.375 "driver_specific": {} 00:23:49.375 } 00:23:49.375 ] 00:23:49.375 22:31:48 -- common/autotest_common.sh@895 -- # return 0 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.375 22:31:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.633 22:31:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.633 "name": "Existed_Raid", 00:23:49.633 "uuid": "b3ee0f46-5d4a-4a08-b576-83bb0e82285d", 00:23:49.633 "strip_size_kb": 64, 00:23:49.633 "state": "online", 00:23:49.633 "raid_level": "raid5f", 00:23:49.633 "superblock": false, 00:23:49.633 "num_base_bdevs": 3, 00:23:49.633 "num_base_bdevs_discovered": 3, 00:23:49.633 "num_base_bdevs_operational": 3, 00:23:49.633 "base_bdevs_list": [ 00:23:49.633 { 00:23:49.633 "name": "BaseBdev1", 00:23:49.633 "uuid": "3dc6412e-a541-4685-ad42-6a07588a10cd", 00:23:49.633 "is_configured": true, 00:23:49.633 "data_offset": 0, 00:23:49.633 "data_size": 65536 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "name": "BaseBdev2", 00:23:49.633 "uuid": "73c43ef7-1eac-4d31-bc23-dac151b846d4", 00:23:49.633 "is_configured": true, 00:23:49.633 "data_offset": 0, 00:23:49.633 "data_size": 65536 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "name": "BaseBdev3", 00:23:49.633 "uuid": "3bd254c1-880a-41f5-b9ea-30274d93b07f", 00:23:49.633 "is_configured": true, 00:23:49.633 "data_offset": 0, 00:23:49.633 "data_size": 65536 00:23:49.633 } 00:23:49.633 ] 00:23:49.633 }' 00:23:49.633 22:31:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.633 22:31:48 -- common/autotest_common.sh@10 -- # set +x 00:23:50.199 22:31:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:23:50.457 [2024-07-14 22:31:49.430452] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.457 22:31:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.715 22:31:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.715 "name": "Existed_Raid", 00:23:50.715 "uuid": "b3ee0f46-5d4a-4a08-b576-83bb0e82285d", 00:23:50.715 "strip_size_kb": 64, 00:23:50.715 "state": "online", 00:23:50.715 "raid_level": "raid5f", 00:23:50.715 "superblock": false, 00:23:50.715 "num_base_bdevs": 3, 00:23:50.715 "num_base_bdevs_discovered": 2, 00:23:50.715 "num_base_bdevs_operational": 2, 00:23:50.715 "base_bdevs_list": [ 00:23:50.715 { 00:23:50.715 "name": null, 00:23:50.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.715 "is_configured": false, 00:23:50.715 "data_offset": 0, 00:23:50.715 "data_size": 65536 00:23:50.715 }, 00:23:50.715 { 00:23:50.715 "name": "BaseBdev2", 00:23:50.715 "uuid": "73c43ef7-1eac-4d31-bc23-dac151b846d4", 00:23:50.715 "is_configured": true, 00:23:50.715 "data_offset": 0, 00:23:50.715 "data_size": 65536 00:23:50.715 }, 00:23:50.715 { 00:23:50.715 "name": "BaseBdev3", 00:23:50.715 "uuid": "3bd254c1-880a-41f5-b9ea-30274d93b07f", 00:23:50.715 "is_configured": true, 00:23:50.715 "data_offset": 0, 00:23:50.715 "data_size": 65536 00:23:50.715 } 00:23:50.715 ] 00:23:50.715 }' 00:23:50.715 22:31:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.715 22:31:49 -- common/autotest_common.sh@10 -- # set +x 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:51.649 22:31:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:51.907 [2024-07-14 22:31:50.781748] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:51.907 [2024-07-14 22:31:50.781786] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.907 [2024-07-14 22:31:50.781863] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.907 22:31:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:51.907 22:31:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:51.907 22:31:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.907 22:31:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:52.165 22:31:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:52.165 22:31:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:52.165 22:31:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:52.424 [2024-07-14 22:31:51.290905] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:52.424 [2024-07-14 22:31:51.290980] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:52.424 22:31:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:52.424 22:31:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:52.424 22:31:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:52.424 22:31:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.683 22:31:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:52.683 22:31:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:52.683 22:31:51 -- bdev/bdev_raid.sh@287 -- # killprocess 130473 00:23:52.683 22:31:51 -- common/autotest_common.sh@926 -- # '[' -z 130473 ']' 00:23:52.683 22:31:51 -- common/autotest_common.sh@930 -- # kill -0 130473 00:23:52.683 22:31:51 -- common/autotest_common.sh@931 -- # uname 00:23:52.683 22:31:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:52.684 22:31:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130473 00:23:52.684 killing process with pid 130473 00:23:52.684 22:31:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:52.684 22:31:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:52.684 22:31:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130473' 00:23:52.684 22:31:51 -- common/autotest_common.sh@945 -- # kill 130473 00:23:52.684 22:31:51 -- common/autotest_common.sh@950 -- # wait 130473 00:23:52.684 [2024-07-14 22:31:51.642843] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.684 [2024-07-14 22:31:51.643002] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.061 ************************************ 00:23:54.061 END TEST raid5f_state_function_test 00:23:54.061 ************************************ 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:54.061 00:23:54.061 real 0m12.211s 00:23:54.061 user 0m21.564s 00:23:54.061 sys 0m1.460s 00:23:54.061 22:31:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.061 22:31:52 -- common/autotest_common.sh@10 -- # set +x 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:54.061 22:31:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:54.061 
22:31:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.061 22:31:52 -- common/autotest_common.sh@10 -- # set +x 00:23:54.061 ************************************ 00:23:54.061 START TEST raid5f_state_function_test_sb 00:23:54.061 ************************************ 00:23:54.061 22:31:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=130861 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130861' 00:23:54.061 Process raid pid: 130861 00:23:54.061 22:31:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130861 /var/tmp/spdk-raid.sock 00:23:54.061 22:31:52 -- common/autotest_common.sh@819 -- # '[' -z 130861 ']' 00:23:54.061 22:31:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:54.061 22:31:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:54.061 22:31:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:54.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
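The waitforlisten helper traced above (and resumed below) polls until the freshly started bdev_svc process answers on its UNIX-domain RPC socket. A simplified sketch of that wait loop, assuming the pid and socket values from this trace; the function name wait_for_rpc is hypothetical, and the real helper in autotest_common.sh adds more bookkeeping around retries and error reporting:

    # Simplified sketch of the loop waitforlisten performs here (assumption:
    # poll process liveness plus socket presence; not the verbatim helper).
    wait_for_rpc() {
        local pid=$1 rpc_addr=$2 i
        for ((i = 0; i < 100; i++)); do             # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1  # target process died early
            [[ -S $rpc_addr ]] && return 0          # RPC socket is listening
            sleep 0.1
        done
        return 1                                    # gave up after max_retries
    }
    wait_for_rpc 130861 /var/tmp/spdk-raid.sock
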
00:23:54.061 22:31:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:54.061 22:31:52 -- common/autotest_common.sh@10 -- # set +x 00:23:54.061 [2024-07-14 22:31:52.820737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:54.061 [2024-07-14 22:31:52.820939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.061 [2024-07-14 22:31:52.992589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.320 [2024-07-14 22:31:53.265083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.576 [2024-07-14 22:31:53.468918] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.834 22:31:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:54.834 22:31:53 -- common/autotest_common.sh@852 -- # return 0 00:23:54.834 22:31:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:55.092 [2024-07-14 22:31:53.980666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:55.092 [2024-07-14 22:31:53.980740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:55.092 [2024-07-14 22:31:53.980756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:55.092 [2024-07-14 22:31:53.980775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:55.092 [2024-07-14 22:31:53.980783] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:55.092 [2024-07-14 22:31:53.980823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.092 22:31:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.350 22:31:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.350 "name": "Existed_Raid", 00:23:55.350 "uuid": "c32d5e7d-102e-466a-b245-3b61bad0693e", 00:23:55.350 "strip_size_kb": 64, 00:23:55.350 "state": "configuring", 00:23:55.350 "raid_level": "raid5f", 00:23:55.350 "superblock": true, 00:23:55.350 "num_base_bdevs": 3, 00:23:55.350 "num_base_bdevs_discovered": 0, 00:23:55.350 "num_base_bdevs_operational": 3, 00:23:55.350 "base_bdevs_list": [ 00:23:55.350 { 00:23:55.350 "name": 
"BaseBdev1", 00:23:55.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.350 "is_configured": false, 00:23:55.350 "data_offset": 0, 00:23:55.350 "data_size": 0 00:23:55.350 }, 00:23:55.350 { 00:23:55.350 "name": "BaseBdev2", 00:23:55.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.350 "is_configured": false, 00:23:55.350 "data_offset": 0, 00:23:55.350 "data_size": 0 00:23:55.350 }, 00:23:55.350 { 00:23:55.350 "name": "BaseBdev3", 00:23:55.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.350 "is_configured": false, 00:23:55.350 "data_offset": 0, 00:23:55.350 "data_size": 0 00:23:55.350 } 00:23:55.350 ] 00:23:55.350 }' 00:23:55.350 22:31:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.350 22:31:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.917 22:31:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:56.175 [2024-07-14 22:31:55.068686] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:56.175 [2024-07-14 22:31:55.068723] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:56.175 22:31:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:56.433 [2024-07-14 22:31:55.308769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:56.433 [2024-07-14 22:31:55.308823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:56.433 [2024-07-14 22:31:55.308846] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:56.433 [2024-07-14 22:31:55.308864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:56.433 [2024-07-14 22:31:55.308872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:56.433 [2024-07-14 22:31:55.308924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:56.433 22:31:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:56.692 [2024-07-14 22:31:55.573824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.692 BaseBdev1 00:23:56.692 22:31:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:56.692 22:31:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:56.692 22:31:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:56.692 22:31:55 -- common/autotest_common.sh@889 -- # local i 00:23:56.692 22:31:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:56.692 22:31:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:56.692 22:31:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:56.951 22:31:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:56.951 [ 00:23:56.951 { 00:23:56.951 "name": "BaseBdev1", 00:23:56.951 "aliases": [ 00:23:56.951 "830a0b76-55c7-4c86-8a5b-a6fedfcec874" 00:23:56.951 ], 00:23:56.951 "product_name": "Malloc disk", 00:23:56.951 "block_size": 512, 00:23:56.951 
"num_blocks": 65536, 00:23:56.951 "uuid": "830a0b76-55c7-4c86-8a5b-a6fedfcec874", 00:23:56.951 "assigned_rate_limits": { 00:23:56.951 "rw_ios_per_sec": 0, 00:23:56.951 "rw_mbytes_per_sec": 0, 00:23:56.951 "r_mbytes_per_sec": 0, 00:23:56.951 "w_mbytes_per_sec": 0 00:23:56.951 }, 00:23:56.951 "claimed": true, 00:23:56.951 "claim_type": "exclusive_write", 00:23:56.951 "zoned": false, 00:23:56.951 "supported_io_types": { 00:23:56.951 "read": true, 00:23:56.951 "write": true, 00:23:56.951 "unmap": true, 00:23:56.951 "write_zeroes": true, 00:23:56.951 "flush": true, 00:23:56.951 "reset": true, 00:23:56.951 "compare": false, 00:23:56.951 "compare_and_write": false, 00:23:56.951 "abort": true, 00:23:56.951 "nvme_admin": false, 00:23:56.951 "nvme_io": false 00:23:56.951 }, 00:23:56.951 "memory_domains": [ 00:23:56.951 { 00:23:56.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.951 "dma_device_type": 2 00:23:56.951 } 00:23:56.951 ], 00:23:56.951 "driver_specific": {} 00:23:56.951 } 00:23:56.951 ] 00:23:56.951 22:31:55 -- common/autotest_common.sh@895 -- # return 0 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.951 22:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.209 22:31:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.209 "name": "Existed_Raid", 00:23:57.209 "uuid": "07226c02-055b-4d1b-9d65-e65cdb670786", 00:23:57.209 "strip_size_kb": 64, 00:23:57.209 "state": "configuring", 00:23:57.209 "raid_level": "raid5f", 00:23:57.209 "superblock": true, 00:23:57.209 "num_base_bdevs": 3, 00:23:57.209 "num_base_bdevs_discovered": 1, 00:23:57.209 "num_base_bdevs_operational": 3, 00:23:57.209 "base_bdevs_list": [ 00:23:57.209 { 00:23:57.209 "name": "BaseBdev1", 00:23:57.209 "uuid": "830a0b76-55c7-4c86-8a5b-a6fedfcec874", 00:23:57.209 "is_configured": true, 00:23:57.209 "data_offset": 2048, 00:23:57.209 "data_size": 63488 00:23:57.209 }, 00:23:57.209 { 00:23:57.209 "name": "BaseBdev2", 00:23:57.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.209 "is_configured": false, 00:23:57.209 "data_offset": 0, 00:23:57.209 "data_size": 0 00:23:57.209 }, 00:23:57.209 { 00:23:57.209 "name": "BaseBdev3", 00:23:57.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.210 "is_configured": false, 00:23:57.210 "data_offset": 0, 00:23:57.210 "data_size": 0 00:23:57.210 } 00:23:57.210 ] 00:23:57.210 }' 00:23:57.210 22:31:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.210 22:31:56 -- common/autotest_common.sh@10 -- # set +x 00:23:57.774 22:31:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:58.031 [2024-07-14 22:31:56.942062] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:58.031 [2024-07-14 22:31:56.942102] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:58.031 22:31:56 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:58.031 22:31:56 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:58.298 22:31:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:58.568 BaseBdev1 00:23:58.568 22:31:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:58.568 22:31:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:58.568 22:31:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:58.568 22:31:57 -- common/autotest_common.sh@889 -- # local i 00:23:58.568 22:31:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:58.568 22:31:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:58.568 22:31:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:58.826 22:31:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:58.826 [ 00:23:58.826 { 00:23:58.826 "name": "BaseBdev1", 00:23:58.826 "aliases": [ 00:23:58.826 "afbf180d-2484-4c88-9061-0e7c087be386" 00:23:58.826 ], 00:23:58.826 "product_name": "Malloc disk", 00:23:58.826 "block_size": 512, 00:23:58.826 "num_blocks": 65536, 00:23:58.826 "uuid": "afbf180d-2484-4c88-9061-0e7c087be386", 00:23:58.826 "assigned_rate_limits": { 00:23:58.826 "rw_ios_per_sec": 0, 00:23:58.826 "rw_mbytes_per_sec": 0, 00:23:58.826 "r_mbytes_per_sec": 0, 00:23:58.826 "w_mbytes_per_sec": 0 00:23:58.826 }, 00:23:58.826 "claimed": false, 00:23:58.826 "zoned": false, 00:23:58.826 "supported_io_types": { 00:23:58.826 "read": true, 00:23:58.826 "write": true, 00:23:58.826 "unmap": true, 00:23:58.826 "write_zeroes": true, 00:23:58.826 "flush": true, 00:23:58.826 "reset": true, 00:23:58.826 "compare": false, 00:23:58.826 "compare_and_write": false, 00:23:58.826 "abort": true, 00:23:58.826 "nvme_admin": false, 00:23:58.826 "nvme_io": false 00:23:58.826 }, 00:23:58.826 "memory_domains": [ 00:23:58.826 { 00:23:58.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.826 "dma_device_type": 2 00:23:58.826 } 00:23:58.826 ], 00:23:58.826 "driver_specific": {} 00:23:58.826 } 00:23:58.826 ] 00:23:58.826 22:31:57 -- common/autotest_common.sh@895 -- # return 0 00:23:58.826 22:31:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:59.083 [2024-07-14 22:31:57.988863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:59.083 [2024-07-14 22:31:57.990375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:59.083 [2024-07-14 22:31:57.990431] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:59.083 [2024-07-14 22:31:57.990444] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:59.083 [2024-07-14 
22:31:57.990475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.083 22:31:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.083 22:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.341 22:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:59.341 "name": "Existed_Raid", 00:23:59.341 "uuid": "4e87dc52-4502-42c5-9243-c924d0bded15", 00:23:59.341 "strip_size_kb": 64, 00:23:59.341 "state": "configuring", 00:23:59.341 "raid_level": "raid5f", 00:23:59.341 "superblock": true, 00:23:59.341 "num_base_bdevs": 3, 00:23:59.341 "num_base_bdevs_discovered": 1, 00:23:59.341 "num_base_bdevs_operational": 3, 00:23:59.341 "base_bdevs_list": [ 00:23:59.341 { 00:23:59.341 "name": "BaseBdev1", 00:23:59.341 "uuid": "afbf180d-2484-4c88-9061-0e7c087be386", 00:23:59.341 "is_configured": true, 00:23:59.341 "data_offset": 2048, 00:23:59.341 "data_size": 63488 00:23:59.341 }, 00:23:59.341 { 00:23:59.341 "name": "BaseBdev2", 00:23:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.341 "is_configured": false, 00:23:59.341 "data_offset": 0, 00:23:59.341 "data_size": 0 00:23:59.341 }, 00:23:59.341 { 00:23:59.341 "name": "BaseBdev3", 00:23:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.341 "is_configured": false, 00:23:59.341 "data_offset": 0, 00:23:59.341 "data_size": 0 00:23:59.341 } 00:23:59.341 ] 00:23:59.341 }' 00:23:59.341 22:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:59.341 22:31:58 -- common/autotest_common.sh@10 -- # set +x 00:23:59.907 22:31:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:00.164 [2024-07-14 22:31:59.147000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:00.164 BaseBdev2 00:24:00.164 22:31:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:00.164 22:31:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:00.164 22:31:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:00.164 22:31:59 -- common/autotest_common.sh@889 -- # local i 00:24:00.164 22:31:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:00.164 22:31:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:00.164 22:31:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.422 22:31:59 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:00.681 [ 00:24:00.681 { 00:24:00.681 "name": "BaseBdev2", 00:24:00.681 "aliases": [ 00:24:00.681 "7ff9d24e-118d-46d1-b4b4-128d8a767e8f" 00:24:00.681 ], 00:24:00.681 "product_name": "Malloc disk", 00:24:00.681 "block_size": 512, 00:24:00.681 "num_blocks": 65536, 00:24:00.681 "uuid": "7ff9d24e-118d-46d1-b4b4-128d8a767e8f", 00:24:00.681 "assigned_rate_limits": { 00:24:00.681 "rw_ios_per_sec": 0, 00:24:00.681 "rw_mbytes_per_sec": 0, 00:24:00.681 "r_mbytes_per_sec": 0, 00:24:00.681 "w_mbytes_per_sec": 0 00:24:00.681 }, 00:24:00.681 "claimed": true, 00:24:00.681 "claim_type": "exclusive_write", 00:24:00.681 "zoned": false, 00:24:00.681 "supported_io_types": { 00:24:00.681 "read": true, 00:24:00.681 "write": true, 00:24:00.681 "unmap": true, 00:24:00.681 "write_zeroes": true, 00:24:00.681 "flush": true, 00:24:00.681 "reset": true, 00:24:00.681 "compare": false, 00:24:00.681 "compare_and_write": false, 00:24:00.681 "abort": true, 00:24:00.681 "nvme_admin": false, 00:24:00.681 "nvme_io": false 00:24:00.681 }, 00:24:00.681 "memory_domains": [ 00:24:00.681 { 00:24:00.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.681 "dma_device_type": 2 00:24:00.681 } 00:24:00.681 ], 00:24:00.681 "driver_specific": {} 00:24:00.681 } 00:24:00.681 ] 00:24:00.681 22:31:59 -- common/autotest_common.sh@895 -- # return 0 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.681 22:31:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.940 22:31:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.940 "name": "Existed_Raid", 00:24:00.940 "uuid": "4e87dc52-4502-42c5-9243-c924d0bded15", 00:24:00.940 "strip_size_kb": 64, 00:24:00.940 "state": "configuring", 00:24:00.940 "raid_level": "raid5f", 00:24:00.940 "superblock": true, 00:24:00.940 "num_base_bdevs": 3, 00:24:00.940 "num_base_bdevs_discovered": 2, 00:24:00.940 "num_base_bdevs_operational": 3, 00:24:00.940 "base_bdevs_list": [ 00:24:00.940 { 00:24:00.940 "name": "BaseBdev1", 00:24:00.940 "uuid": "afbf180d-2484-4c88-9061-0e7c087be386", 00:24:00.940 "is_configured": true, 00:24:00.940 "data_offset": 2048, 00:24:00.940 "data_size": 63488 00:24:00.940 }, 00:24:00.940 { 00:24:00.940 "name": "BaseBdev2", 00:24:00.940 "uuid": "7ff9d24e-118d-46d1-b4b4-128d8a767e8f", 00:24:00.940 "is_configured": true, 00:24:00.940 "data_offset": 2048, 00:24:00.940 
"data_size": 63488 00:24:00.940 }, 00:24:00.940 { 00:24:00.940 "name": "BaseBdev3", 00:24:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.940 "is_configured": false, 00:24:00.940 "data_offset": 0, 00:24:00.940 "data_size": 0 00:24:00.940 } 00:24:00.940 ] 00:24:00.940 }' 00:24:00.940 22:31:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.940 22:31:59 -- common/autotest_common.sh@10 -- # set +x 00:24:01.506 22:32:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:01.765 [2024-07-14 22:32:00.747234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:01.765 [2024-07-14 22:32:00.747507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:24:01.765 [2024-07-14 22:32:00.747524] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:01.765 [2024-07-14 22:32:00.747646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:01.765 BaseBdev3 00:24:01.765 [2024-07-14 22:32:00.751992] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:24:01.765 [2024-07-14 22:32:00.752017] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:24:01.765 [2024-07-14 22:32:00.752176] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.765 22:32:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:01.765 22:32:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:01.765 22:32:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:01.765 22:32:00 -- common/autotest_common.sh@889 -- # local i 00:24:01.765 22:32:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:01.765 22:32:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:01.765 22:32:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.024 22:32:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:02.282 [ 00:24:02.282 { 00:24:02.282 "name": "BaseBdev3", 00:24:02.282 "aliases": [ 00:24:02.282 "db436062-eee9-4443-a4ba-301114e967f1" 00:24:02.282 ], 00:24:02.282 "product_name": "Malloc disk", 00:24:02.282 "block_size": 512, 00:24:02.282 "num_blocks": 65536, 00:24:02.282 "uuid": "db436062-eee9-4443-a4ba-301114e967f1", 00:24:02.282 "assigned_rate_limits": { 00:24:02.282 "rw_ios_per_sec": 0, 00:24:02.282 "rw_mbytes_per_sec": 0, 00:24:02.282 "r_mbytes_per_sec": 0, 00:24:02.282 "w_mbytes_per_sec": 0 00:24:02.282 }, 00:24:02.282 "claimed": true, 00:24:02.282 "claim_type": "exclusive_write", 00:24:02.282 "zoned": false, 00:24:02.282 "supported_io_types": { 00:24:02.282 "read": true, 00:24:02.283 "write": true, 00:24:02.283 "unmap": true, 00:24:02.283 "write_zeroes": true, 00:24:02.283 "flush": true, 00:24:02.283 "reset": true, 00:24:02.283 "compare": false, 00:24:02.283 "compare_and_write": false, 00:24:02.283 "abort": true, 00:24:02.283 "nvme_admin": false, 00:24:02.283 "nvme_io": false 00:24:02.283 }, 00:24:02.283 "memory_domains": [ 00:24:02.283 { 00:24:02.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.283 "dma_device_type": 2 00:24:02.283 } 00:24:02.283 ], 00:24:02.283 "driver_specific": {} 00:24:02.283 } 00:24:02.283 ] 00:24:02.283 
22:32:01 -- common/autotest_common.sh@895 -- # return 0 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.283 22:32:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.541 22:32:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:02.541 "name": "Existed_Raid", 00:24:02.541 "uuid": "4e87dc52-4502-42c5-9243-c924d0bded15", 00:24:02.541 "strip_size_kb": 64, 00:24:02.541 "state": "online", 00:24:02.541 "raid_level": "raid5f", 00:24:02.541 "superblock": true, 00:24:02.541 "num_base_bdevs": 3, 00:24:02.541 "num_base_bdevs_discovered": 3, 00:24:02.541 "num_base_bdevs_operational": 3, 00:24:02.541 "base_bdevs_list": [ 00:24:02.541 { 00:24:02.541 "name": "BaseBdev1", 00:24:02.541 "uuid": "afbf180d-2484-4c88-9061-0e7c087be386", 00:24:02.541 "is_configured": true, 00:24:02.541 "data_offset": 2048, 00:24:02.541 "data_size": 63488 00:24:02.541 }, 00:24:02.541 { 00:24:02.541 "name": "BaseBdev2", 00:24:02.541 "uuid": "7ff9d24e-118d-46d1-b4b4-128d8a767e8f", 00:24:02.541 "is_configured": true, 00:24:02.541 "data_offset": 2048, 00:24:02.541 "data_size": 63488 00:24:02.541 }, 00:24:02.541 { 00:24:02.541 "name": "BaseBdev3", 00:24:02.541 "uuid": "db436062-eee9-4443-a4ba-301114e967f1", 00:24:02.541 "is_configured": true, 00:24:02.541 "data_offset": 2048, 00:24:02.541 "data_size": 63488 00:24:02.541 } 00:24:02.541 ] 00:24:02.541 }' 00:24:02.541 22:32:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:02.541 22:32:01 -- common/autotest_common.sh@10 -- # set +x 00:24:03.109 22:32:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:03.368 [2024-07-14 22:32:02.273094] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.368 22:32:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.627 22:32:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.627 "name": "Existed_Raid", 00:24:03.627 "uuid": "4e87dc52-4502-42c5-9243-c924d0bded15", 00:24:03.627 "strip_size_kb": 64, 00:24:03.627 "state": "online", 00:24:03.627 "raid_level": "raid5f", 00:24:03.627 "superblock": true, 00:24:03.627 "num_base_bdevs": 3, 00:24:03.627 "num_base_bdevs_discovered": 2, 00:24:03.627 "num_base_bdevs_operational": 2, 00:24:03.627 "base_bdevs_list": [ 00:24:03.627 { 00:24:03.627 "name": null, 00:24:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.627 "is_configured": false, 00:24:03.627 "data_offset": 2048, 00:24:03.627 "data_size": 63488 00:24:03.627 }, 00:24:03.627 { 00:24:03.627 "name": "BaseBdev2", 00:24:03.627 "uuid": "7ff9d24e-118d-46d1-b4b4-128d8a767e8f", 00:24:03.627 "is_configured": true, 00:24:03.627 "data_offset": 2048, 00:24:03.627 "data_size": 63488 00:24:03.627 }, 00:24:03.627 { 00:24:03.627 "name": "BaseBdev3", 00:24:03.627 "uuid": "db436062-eee9-4443-a4ba-301114e967f1", 00:24:03.627 "is_configured": true, 00:24:03.627 "data_offset": 2048, 00:24:03.627 "data_size": 63488 00:24:03.627 } 00:24:03.627 ] 00:24:03.627 }' 00:24:03.627 22:32:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.627 22:32:02 -- common/autotest_common.sh@10 -- # set +x 00:24:04.195 22:32:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:04.195 22:32:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:04.195 22:32:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.195 22:32:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:04.452 22:32:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:04.452 22:32:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.452 22:32:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:04.709 [2024-07-14 22:32:03.603333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.709 [2024-07-14 22:32:03.603367] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:04.709 [2024-07-14 22:32:03.603428] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:04.709 22:32:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:04.709 22:32:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:04.709 22:32:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.709 22:32:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:04.967 22:32:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:04.967 22:32:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.967 22:32:03 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:05.225 [2024-07-14 22:32:04.111549] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:05.225 [2024-07-14 22:32:04.111628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:24:05.225 22:32:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:05.225 22:32:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:05.225 22:32:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.225 22:32:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:05.482 22:32:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:05.482 22:32:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:05.482 22:32:04 -- bdev/bdev_raid.sh@287 -- # killprocess 130861 00:24:05.482 22:32:04 -- common/autotest_common.sh@926 -- # '[' -z 130861 ']' 00:24:05.482 22:32:04 -- common/autotest_common.sh@930 -- # kill -0 130861 00:24:05.482 22:32:04 -- common/autotest_common.sh@931 -- # uname 00:24:05.482 22:32:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:05.482 22:32:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130861 00:24:05.482 killing process with pid 130861 00:24:05.482 22:32:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:05.482 22:32:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:05.482 22:32:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130861' 00:24:05.482 22:32:04 -- common/autotest_common.sh@945 -- # kill 130861 00:24:05.482 22:32:04 -- common/autotest_common.sh@950 -- # wait 130861 00:24:05.482 [2024-07-14 22:32:04.412409] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:05.482 [2024-07-14 22:32:04.412507] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:06.417 ************************************ 00:24:06.417 END TEST raid5f_state_function_test_sb 00:24:06.417 ************************************ 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:06.417 00:24:06.417 real 0m12.581s 00:24:06.417 user 0m22.272s 00:24:06.417 sys 0m1.627s 00:24:06.417 22:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:06.417 22:32:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:24:06.417 22:32:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:06.417 22:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:06.417 22:32:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 ************************************ 00:24:06.417 START TEST raid5f_superblock_test 00:24:06.417 ************************************ 00:24:06.417 22:32:05 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:06.417 22:32:05 
-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@357 -- # raid_pid=131276 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131276 /var/tmp/spdk-raid.sock 00:24:06.417 22:32:05 -- common/autotest_common.sh@819 -- # '[' -z 131276 ']' 00:24:06.417 22:32:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:06.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:06.417 22:32:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:06.417 22:32:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:06.417 22:32:05 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:06.417 22:32:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:06.417 22:32:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 [2024-07-14 22:32:05.468381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:06.417 [2024-07-14 22:32:05.468618] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131276 ] 00:24:06.676 [2024-07-14 22:32:05.640207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.934 [2024-07-14 22:32:05.850298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.193 [2024-07-14 22:32:06.015922] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:07.451 22:32:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:07.451 22:32:06 -- common/autotest_common.sh@852 -- # return 0 00:24:07.451 22:32:06 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:07.452 22:32:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:07.710 malloc1 00:24:07.710 22:32:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:07.968 
[2024-07-14 22:32:06.792086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:07.968 [2024-07-14 22:32:06.792268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.968 [2024-07-14 22:32:06.792326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:07.968 [2024-07-14 22:32:06.792447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.968 [2024-07-14 22:32:06.794564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.968 [2024-07-14 22:32:06.794717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:07.968 pt1 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:07.968 22:32:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:08.227 malloc2 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:08.227 [2024-07-14 22:32:07.236433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:08.227 [2024-07-14 22:32:07.236610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.227 [2024-07-14 22:32:07.236679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:08.227 [2024-07-14 22:32:07.236811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.227 [2024-07-14 22:32:07.238689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.227 [2024-07-14 22:32:07.238868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:08.227 pt2 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:08.227 22:32:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:08.485 malloc3 00:24:08.485 22:32:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:08.744 
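At this point in the trace the test has issued the same two RPCs for each of the three raid legs: create a malloc bdev (32 MB with 512-byte blocks, per the bdev_malloc_create arguments above) and wrap it in a passthru bdev with a fixed UUID. A condensed sketch of one leg, with the rpc.py path, socket, names, and UUID copied verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as invoked throughout the trace
  sock=/var/tmp/spdk-raid.sock                      # bdev_svc RPC socket from the trace
  # leg 1; the trace repeats this with malloc2/pt2/...0002 and malloc3/pt3/...0003
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001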
[2024-07-14 22:32:07.736583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:08.744 [2024-07-14 22:32:07.736752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.744 [2024-07-14 22:32:07.736818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:08.744 [2024-07-14 22:32:07.736964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.744 [2024-07-14 22:32:07.738932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.744 [2024-07-14 22:32:07.739092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:08.744 pt3 00:24:08.744 22:32:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:08.744 22:32:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:08.744 22:32:07 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:24:09.002 [2024-07-14 22:32:07.944663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:09.003 [2024-07-14 22:32:07.946499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:09.003 [2024-07-14 22:32:07.946675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:09.003 [2024-07-14 22:32:07.946917] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:24:09.003 [2024-07-14 22:32:07.947022] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:09.003 [2024-07-14 22:32:07.947177] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:24:09.003 [2024-07-14 22:32:07.951386] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:24:09.003 [2024-07-14 22:32:07.951505] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:24:09.003 [2024-07-14 22:32:07.951742] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.003 22:32:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.261 22:32:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.261 "name": "raid_bdev1", 00:24:09.261 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:09.261 "strip_size_kb": 64, 00:24:09.261 "state": "online", 00:24:09.261 "raid_level": "raid5f", 00:24:09.261 "superblock": true, 00:24:09.261 
"num_base_bdevs": 3, 00:24:09.261 "num_base_bdevs_discovered": 3, 00:24:09.261 "num_base_bdevs_operational": 3, 00:24:09.261 "base_bdevs_list": [ 00:24:09.261 { 00:24:09.261 "name": "pt1", 00:24:09.261 "uuid": "f2087616-1ae6-5b5a-91ac-cbeef483f1c5", 00:24:09.261 "is_configured": true, 00:24:09.261 "data_offset": 2048, 00:24:09.261 "data_size": 63488 00:24:09.261 }, 00:24:09.261 { 00:24:09.261 "name": "pt2", 00:24:09.261 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:09.261 "is_configured": true, 00:24:09.261 "data_offset": 2048, 00:24:09.261 "data_size": 63488 00:24:09.261 }, 00:24:09.261 { 00:24:09.261 "name": "pt3", 00:24:09.261 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:09.261 "is_configured": true, 00:24:09.261 "data_offset": 2048, 00:24:09.261 "data_size": 63488 00:24:09.261 } 00:24:09.261 ] 00:24:09.261 }' 00:24:09.261 22:32:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.261 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:24:09.828 22:32:08 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:09.828 22:32:08 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:10.096 [2024-07-14 22:32:09.037232] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:10.096 22:32:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8995e239-2920-45cb-82c7-d75336a910a2 00:24:10.096 22:32:09 -- bdev/bdev_raid.sh@380 -- # '[' -z 8995e239-2920-45cb-82c7-d75336a910a2 ']' 00:24:10.096 22:32:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:10.354 [2024-07-14 22:32:09.285156] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.354 [2024-07-14 22:32:09.285312] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:10.354 [2024-07-14 22:32:09.285490] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.354 [2024-07-14 22:32:09.285669] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.354 [2024-07-14 22:32:09.285797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:10.354 22:32:09 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.354 22:32:09 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:10.618 22:32:09 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:10.618 22:32:09 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:10.618 22:32:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:10.618 22:32:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:10.891 22:32:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:10.891 22:32:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:10.891 22:32:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:10.891 22:32:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:11.149 22:32:10 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:11.149 22:32:10 -- 
bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:11.407 22:32:10 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:11.407 22:32:10 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:11.407 22:32:10 -- common/autotest_common.sh@640 -- # local es=0 00:24:11.407 22:32:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:11.407 22:32:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.407 22:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:11.407 22:32:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.407 22:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:11.407 22:32:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.407 22:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:11.407 22:32:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.407 22:32:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:11.407 22:32:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:11.665 [2024-07-14 22:32:10.537386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:11.665 [2024-07-14 22:32:10.539020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:11.665 [2024-07-14 22:32:10.539070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:11.665 [2024-07-14 22:32:10.539119] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:11.665 [2024-07-14 22:32:10.539172] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:11.665 [2024-07-14 22:32:10.539232] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:11.665 [2024-07-14 22:32:10.539289] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.665 [2024-07-14 22:32:10.539300] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:24:11.665 request: 00:24:11.665 { 00:24:11.665 "name": "raid_bdev1", 00:24:11.665 "raid_level": "raid5f", 00:24:11.665 "base_bdevs": [ 00:24:11.665 "malloc1", 00:24:11.665 "malloc2", 00:24:11.665 "malloc3" 00:24:11.665 ], 00:24:11.665 "superblock": false, 00:24:11.665 "strip_size_kb": 64, 00:24:11.665 "method": "bdev_raid_create", 00:24:11.665 "req_id": 1 00:24:11.665 } 00:24:11.665 Got JSON-RPC error response 00:24:11.665 response: 00:24:11.665 { 00:24:11.665 "code": -17, 00:24:11.665 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:11.665 } 00:24:11.665 22:32:10 -- common/autotest_common.sh@643 -- # es=1 00:24:11.665 22:32:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:11.665 22:32:10 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:11.665 22:32:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:11.665 22:32:10 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.665 22:32:10 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:11.922 [2024-07-14 22:32:10.949443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:11.922 [2024-07-14 22:32:10.949513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.922 [2024-07-14 22:32:10.949543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:11.922 [2024-07-14 22:32:10.949560] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.922 [2024-07-14 22:32:10.951401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.922 [2024-07-14 22:32:10.951442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:11.922 [2024-07-14 22:32:10.951538] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:11.922 [2024-07-14 22:32:10.951587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:11.922 pt1 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.922 22:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.180 22:32:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.180 "name": "raid_bdev1", 00:24:12.180 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:12.180 "strip_size_kb": 64, 00:24:12.180 "state": "configuring", 00:24:12.180 "raid_level": "raid5f", 00:24:12.180 "superblock": true, 00:24:12.180 "num_base_bdevs": 3, 00:24:12.180 "num_base_bdevs_discovered": 1, 00:24:12.180 "num_base_bdevs_operational": 3, 00:24:12.180 "base_bdevs_list": [ 00:24:12.180 { 00:24:12.180 "name": "pt1", 00:24:12.180 "uuid": "f2087616-1ae6-5b5a-91ac-cbeef483f1c5", 00:24:12.180 "is_configured": true, 00:24:12.180 "data_offset": 2048, 00:24:12.180 "data_size": 63488 00:24:12.180 }, 00:24:12.180 { 00:24:12.180 "name": null, 00:24:12.180 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:12.180 "is_configured": false, 00:24:12.180 
"data_offset": 2048, 00:24:12.180 "data_size": 63488 00:24:12.180 }, 00:24:12.180 { 00:24:12.180 "name": null, 00:24:12.180 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:12.180 "is_configured": false, 00:24:12.180 "data_offset": 2048, 00:24:12.180 "data_size": 63488 00:24:12.180 } 00:24:12.180 ] 00:24:12.180 }' 00:24:12.180 22:32:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.180 22:32:11 -- common/autotest_common.sh@10 -- # set +x 00:24:13.113 22:32:11 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:24:13.113 22:32:11 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:13.113 [2024-07-14 22:32:12.113714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:13.113 [2024-07-14 22:32:12.113793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.113 [2024-07-14 22:32:12.113838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:13.113 [2024-07-14 22:32:12.113854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.113 [2024-07-14 22:32:12.114235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.113 [2024-07-14 22:32:12.114261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:13.113 [2024-07-14 22:32:12.114355] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:13.113 [2024-07-14 22:32:12.114378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:13.113 pt2 00:24:13.113 22:32:12 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:13.371 [2024-07-14 22:32:12.293757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.371 22:32:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.629 22:32:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.629 "name": "raid_bdev1", 00:24:13.629 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:13.629 "strip_size_kb": 64, 00:24:13.629 "state": "configuring", 00:24:13.629 "raid_level": "raid5f", 00:24:13.629 "superblock": true, 00:24:13.629 "num_base_bdevs": 3, 00:24:13.629 "num_base_bdevs_discovered": 1, 00:24:13.629 "num_base_bdevs_operational": 3, 00:24:13.629 "base_bdevs_list": [ 00:24:13.629 { 00:24:13.629 "name": "pt1", 00:24:13.629 "uuid": 
"f2087616-1ae6-5b5a-91ac-cbeef483f1c5", 00:24:13.629 "is_configured": true, 00:24:13.629 "data_offset": 2048, 00:24:13.629 "data_size": 63488 00:24:13.629 }, 00:24:13.629 { 00:24:13.629 "name": null, 00:24:13.629 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:13.629 "is_configured": false, 00:24:13.629 "data_offset": 2048, 00:24:13.629 "data_size": 63488 00:24:13.629 }, 00:24:13.629 { 00:24:13.629 "name": null, 00:24:13.629 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:13.629 "is_configured": false, 00:24:13.629 "data_offset": 2048, 00:24:13.629 "data_size": 63488 00:24:13.629 } 00:24:13.629 ] 00:24:13.629 }' 00:24:13.629 22:32:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.629 22:32:12 -- common/autotest_common.sh@10 -- # set +x 00:24:14.194 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:14.194 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:14.194 22:32:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:14.450 [2024-07-14 22:32:13.377910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:14.450 [2024-07-14 22:32:13.377972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.450 [2024-07-14 22:32:13.377999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:14.450 [2024-07-14 22:32:13.378021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.450 [2024-07-14 22:32:13.378715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.450 [2024-07-14 22:32:13.378777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:14.450 [2024-07-14 22:32:13.378884] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:14.450 [2024-07-14 22:32:13.378907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:14.450 pt2 00:24:14.450 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:14.450 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:14.450 22:32:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:14.790 [2024-07-14 22:32:13.629963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:14.790 [2024-07-14 22:32:13.630018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.790 [2024-07-14 22:32:13.630046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:14.790 [2024-07-14 22:32:13.630067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.790 [2024-07-14 22:32:13.630725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.790 [2024-07-14 22:32:13.630800] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:14.790 [2024-07-14 22:32:13.630888] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:14.790 [2024-07-14 22:32:13.630911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:14.790 [2024-07-14 22:32:13.631225] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 
00:24:14.790 [2024-07-14 22:32:13.631247] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:14.790 [2024-07-14 22:32:13.631347] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:14.790 [2024-07-14 22:32:13.635461] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:14.790 [2024-07-14 22:32:13.635485] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:14.790 pt3 00:24:14.790 [2024-07-14 22:32:13.635813] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.790 "name": "raid_bdev1", 00:24:14.790 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:14.790 "strip_size_kb": 64, 00:24:14.790 "state": "online", 00:24:14.790 "raid_level": "raid5f", 00:24:14.790 "superblock": true, 00:24:14.790 "num_base_bdevs": 3, 00:24:14.790 "num_base_bdevs_discovered": 3, 00:24:14.790 "num_base_bdevs_operational": 3, 00:24:14.790 "base_bdevs_list": [ 00:24:14.790 { 00:24:14.790 "name": "pt1", 00:24:14.790 "uuid": "f2087616-1ae6-5b5a-91ac-cbeef483f1c5", 00:24:14.790 "is_configured": true, 00:24:14.790 "data_offset": 2048, 00:24:14.790 "data_size": 63488 00:24:14.790 }, 00:24:14.790 { 00:24:14.790 "name": "pt2", 00:24:14.790 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:14.790 "is_configured": true, 00:24:14.790 "data_offset": 2048, 00:24:14.790 "data_size": 63488 00:24:14.790 }, 00:24:14.790 { 00:24:14.790 "name": "pt3", 00:24:14.790 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:14.790 "is_configured": true, 00:24:14.790 "data_offset": 2048, 00:24:14.790 "data_size": 63488 00:24:14.790 } 00:24:14.790 ] 00:24:14.790 }' 00:24:14.790 22:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.790 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:15.724 [2024-07-14 22:32:14.721169] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@430 -- # '[' 
8995e239-2920-45cb-82c7-d75336a910a2 '!=' 8995e239-2920-45cb-82c7-d75336a910a2 ']' 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:15.724 22:32:14 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:15.983 [2024-07-14 22:32:14.960854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.983 22:32:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.241 22:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.241 "name": "raid_bdev1", 00:24:16.241 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:16.241 "strip_size_kb": 64, 00:24:16.241 "state": "online", 00:24:16.241 "raid_level": "raid5f", 00:24:16.241 "superblock": true, 00:24:16.241 "num_base_bdevs": 3, 00:24:16.241 "num_base_bdevs_discovered": 2, 00:24:16.241 "num_base_bdevs_operational": 2, 00:24:16.241 "base_bdevs_list": [ 00:24:16.241 { 00:24:16.241 "name": null, 00:24:16.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.241 "is_configured": false, 00:24:16.241 "data_offset": 2048, 00:24:16.241 "data_size": 63488 00:24:16.241 }, 00:24:16.241 { 00:24:16.241 "name": "pt2", 00:24:16.241 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:16.241 "is_configured": true, 00:24:16.241 "data_offset": 2048, 00:24:16.241 "data_size": 63488 00:24:16.241 }, 00:24:16.241 { 00:24:16.241 "name": "pt3", 00:24:16.241 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:16.241 "is_configured": true, 00:24:16.241 "data_offset": 2048, 00:24:16.241 "data_size": 63488 00:24:16.241 } 00:24:16.241 ] 00:24:16.241 }' 00:24:16.241 22:32:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.241 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:24:17.177 22:32:15 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:17.177 [2024-07-14 22:32:16.053008] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:17.177 [2024-07-14 22:32:16.053045] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:17.177 [2024-07-14 22:32:16.053104] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:17.177 [2024-07-14 22:32:16.053165] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:17.177 
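The passage above is the redundancy check: raid_bdev1 was assembled with bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s (64 KB strip size, on-disk superblock), pt1 was deleted out from under it, and the array was verified to stay online with num_base_bdevs_discovered dropping from 3 to 2 (raid5f tolerates the loss of a single leg) before the raid bdev itself was deleted. A sketch of those calls, commands verbatim from the trace; the state comparison itself is done in bash by the harness's verify_raid_bdev_state function:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  $rpc -s $sock bdev_passthru_delete pt1
  # dump the array's JSON; the harness checks state == "online" and the base-bdev counts
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  $rpc -s $sock bdev_raid_delete raid_bdev1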
[2024-07-14 22:32:16.053178] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:17.177 22:32:16 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.177 22:32:16 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:17.435 22:32:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:17.694 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:17.694 22:32:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:17.694 22:32:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:17.694 22:32:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:17.694 22:32:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:17.952 [2024-07-14 22:32:16.849162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:17.952 [2024-07-14 22:32:16.849241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.952 [2024-07-14 22:32:16.849284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:17.952 [2024-07-14 22:32:16.849310] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.952 [2024-07-14 22:32:16.851315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.952 [2024-07-14 22:32:16.851363] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:17.952 [2024-07-14 22:32:16.851468] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:17.952 [2024-07-14 22:32:16.851539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:17.952 pt2 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.952 22:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
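What follows demonstrates reassembly from the on-disk superblock: because raid_bdev1 was created with -s, each base bdev carries a raid superblock, so after the raid bdev and the remaining passthru bdevs are torn down, re-registering pt2 on top of malloc2 is by itself enough for the raid module to find the superblock ("raid superblock found on bdev pt2" below) and recreate raid_bdev1 in "configuring" state until enough legs are registered. A sketch using the same verbatim commands:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid_bdev1 reappears with "state": "configuring" and num_base_bdevs_discovered 1
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'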
select(.name == "raid_bdev1")' 00:24:18.211 22:32:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.211 "name": "raid_bdev1", 00:24:18.211 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:18.211 "strip_size_kb": 64, 00:24:18.211 "state": "configuring", 00:24:18.211 "raid_level": "raid5f", 00:24:18.211 "superblock": true, 00:24:18.211 "num_base_bdevs": 3, 00:24:18.211 "num_base_bdevs_discovered": 1, 00:24:18.211 "num_base_bdevs_operational": 2, 00:24:18.211 "base_bdevs_list": [ 00:24:18.211 { 00:24:18.211 "name": null, 00:24:18.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.211 "is_configured": false, 00:24:18.211 "data_offset": 2048, 00:24:18.211 "data_size": 63488 00:24:18.211 }, 00:24:18.211 { 00:24:18.211 "name": "pt2", 00:24:18.211 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:18.211 "is_configured": true, 00:24:18.211 "data_offset": 2048, 00:24:18.211 "data_size": 63488 00:24:18.211 }, 00:24:18.211 { 00:24:18.211 "name": null, 00:24:18.211 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:18.211 "is_configured": false, 00:24:18.211 "data_offset": 2048, 00:24:18.211 "data_size": 63488 00:24:18.211 } 00:24:18.211 ] 00:24:18.211 }' 00:24:18.211 22:32:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.211 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:24:18.777 22:32:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:18.777 22:32:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:18.777 22:32:17 -- bdev/bdev_raid.sh@462 -- # i=2 00:24:18.777 22:32:17 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:19.035 [2024-07-14 22:32:17.961376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:19.035 [2024-07-14 22:32:17.961437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.035 [2024-07-14 22:32:17.961474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:19.035 [2024-07-14 22:32:17.961495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.035 [2024-07-14 22:32:17.961977] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.035 [2024-07-14 22:32:17.962023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:19.035 [2024-07-14 22:32:17.962129] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:19.035 [2024-07-14 22:32:17.962158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:19.035 [2024-07-14 22:32:17.962271] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:24:19.035 [2024-07-14 22:32:17.962285] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:19.035 [2024-07-14 22:32:17.962372] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:19.035 [2024-07-14 22:32:17.966429] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:24:19.035 [2024-07-14 22:32:17.966459] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:24:19.035 pt3 00:24:19.035 [2024-07-14 22:32:17.966774] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@466 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.035 22:32:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.294 22:32:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.294 "name": "raid_bdev1", 00:24:19.294 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:19.294 "strip_size_kb": 64, 00:24:19.294 "state": "online", 00:24:19.294 "raid_level": "raid5f", 00:24:19.294 "superblock": true, 00:24:19.294 "num_base_bdevs": 3, 00:24:19.294 "num_base_bdevs_discovered": 2, 00:24:19.294 "num_base_bdevs_operational": 2, 00:24:19.294 "base_bdevs_list": [ 00:24:19.294 { 00:24:19.294 "name": null, 00:24:19.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.294 "is_configured": false, 00:24:19.294 "data_offset": 2048, 00:24:19.294 "data_size": 63488 00:24:19.294 }, 00:24:19.294 { 00:24:19.294 "name": "pt2", 00:24:19.294 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:19.294 "is_configured": true, 00:24:19.294 "data_offset": 2048, 00:24:19.294 "data_size": 63488 00:24:19.294 }, 00:24:19.294 { 00:24:19.294 "name": "pt3", 00:24:19.294 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:19.294 "is_configured": true, 00:24:19.294 "data_offset": 2048, 00:24:19.294 "data_size": 63488 00:24:19.294 } 00:24:19.294 ] 00:24:19.294 }' 00:24:19.294 22:32:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.294 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:24:19.860 22:32:18 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:24:19.860 22:32:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:20.119 [2024-07-14 22:32:19.071545] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:20.119 [2024-07-14 22:32:19.071574] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.119 [2024-07-14 22:32:19.071627] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.119 [2024-07-14 22:32:19.071683] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:20.119 [2024-07-14 22:32:19.071695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:24:20.119 22:32:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.119 22:32:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:20.378 22:32:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:20.378 22:32:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:20.378 22:32:19 -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:20.637 [2024-07-14 22:32:19.563626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:20.637 [2024-07-14 22:32:19.563689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.637 [2024-07-14 22:32:19.563727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:20.637 [2024-07-14 22:32:19.563756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.637 [2024-07-14 22:32:19.566002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.637 [2024-07-14 22:32:19.566053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:20.637 [2024-07-14 22:32:19.566156] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:20.637 [2024-07-14 22:32:19.566261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:20.637 pt1 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.637 22:32:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.896 22:32:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.896 "name": "raid_bdev1", 00:24:20.896 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:20.896 "strip_size_kb": 64, 00:24:20.896 "state": "configuring", 00:24:20.896 "raid_level": "raid5f", 00:24:20.896 "superblock": true, 00:24:20.896 "num_base_bdevs": 3, 00:24:20.896 "num_base_bdevs_discovered": 1, 00:24:20.896 "num_base_bdevs_operational": 3, 00:24:20.896 "base_bdevs_list": [ 00:24:20.896 { 00:24:20.896 "name": "pt1", 00:24:20.896 "uuid": "f2087616-1ae6-5b5a-91ac-cbeef483f1c5", 00:24:20.896 "is_configured": true, 00:24:20.896 "data_offset": 2048, 00:24:20.896 "data_size": 63488 00:24:20.896 }, 00:24:20.896 { 00:24:20.896 "name": null, 00:24:20.896 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:20.896 "is_configured": false, 00:24:20.896 "data_offset": 2048, 00:24:20.896 "data_size": 63488 00:24:20.896 }, 00:24:20.896 { 00:24:20.896 "name": null, 00:24:20.896 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:20.896 "is_configured": false, 00:24:20.896 "data_offset": 2048, 00:24:20.896 "data_size": 63488 00:24:20.896 } 00:24:20.896 ] 00:24:20.896 }' 00:24:20.896 22:32:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.896 22:32:19 -- common/autotest_common.sh@10 -- # set +x 00:24:21.463 22:32:20 -- 
bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:21.463 22:32:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:21.463 22:32:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:21.722 22:32:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:21.722 22:32:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:21.722 22:32:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:21.988 22:32:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:21.988 22:32:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:21.988 22:32:20 -- bdev/bdev_raid.sh@489 -- # i=2 00:24:21.988 22:32:20 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:21.988 [2024-07-14 22:32:21.027950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:21.988 [2024-07-14 22:32:21.028008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.988 [2024-07-14 22:32:21.028041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:21.988 [2024-07-14 22:32:21.028074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.988 [2024-07-14 22:32:21.028448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.988 [2024-07-14 22:32:21.028491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:21.988 [2024-07-14 22:32:21.028580] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:21.988 [2024-07-14 22:32:21.028595] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:21.988 [2024-07-14 22:32:21.028602] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:21.988 [2024-07-14 22:32:21.028618] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:24:21.988 [2024-07-14 22:32:21.028684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:21.988 pt3 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.988 22:32:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.246 22:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.246 "name": "raid_bdev1", 
00:24:22.246 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:22.246 "strip_size_kb": 64, 00:24:22.246 "state": "configuring", 00:24:22.246 "raid_level": "raid5f", 00:24:22.246 "superblock": true, 00:24:22.246 "num_base_bdevs": 3, 00:24:22.246 "num_base_bdevs_discovered": 1, 00:24:22.246 "num_base_bdevs_operational": 2, 00:24:22.246 "base_bdevs_list": [ 00:24:22.246 { 00:24:22.246 "name": null, 00:24:22.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.246 "is_configured": false, 00:24:22.246 "data_offset": 2048, 00:24:22.246 "data_size": 63488 00:24:22.246 }, 00:24:22.246 { 00:24:22.246 "name": null, 00:24:22.246 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:22.246 "is_configured": false, 00:24:22.246 "data_offset": 2048, 00:24:22.246 "data_size": 63488 00:24:22.246 }, 00:24:22.246 { 00:24:22.246 "name": "pt3", 00:24:22.246 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:22.246 "is_configured": true, 00:24:22.246 "data_offset": 2048, 00:24:22.246 "data_size": 63488 00:24:22.246 } 00:24:22.246 ] 00:24:22.246 }' 00:24:22.246 22:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.246 22:32:21 -- common/autotest_common.sh@10 -- # set +x 00:24:22.812 22:32:21 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:22.812 22:32:21 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:22.812 22:32:21 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:23.070 [2024-07-14 22:32:22.024556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:23.070 [2024-07-14 22:32:22.024623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.070 [2024-07-14 22:32:22.024655] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:23.070 [2024-07-14 22:32:22.024676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.070 [2024-07-14 22:32:22.025045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.070 [2024-07-14 22:32:22.025080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:23.070 [2024-07-14 22:32:22.025155] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:23.070 [2024-07-14 22:32:22.025205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:23.070 [2024-07-14 22:32:22.025326] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:24:23.070 [2024-07-14 22:32:22.025340] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:23.070 [2024-07-14 22:32:22.025443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:23.070 [2024-07-14 22:32:22.029521] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:24:23.070 [2024-07-14 22:32:22.029544] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:24:23.070 pt2 00:24:23.070 [2024-07-14 22:32:22.029769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.070 22:32:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.329 22:32:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.329 "name": "raid_bdev1", 00:24:23.329 "uuid": "8995e239-2920-45cb-82c7-d75336a910a2", 00:24:23.329 "strip_size_kb": 64, 00:24:23.329 "state": "online", 00:24:23.329 "raid_level": "raid5f", 00:24:23.329 "superblock": true, 00:24:23.329 "num_base_bdevs": 3, 00:24:23.329 "num_base_bdevs_discovered": 2, 00:24:23.329 "num_base_bdevs_operational": 2, 00:24:23.329 "base_bdevs_list": [ 00:24:23.329 { 00:24:23.329 "name": null, 00:24:23.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.329 "is_configured": false, 00:24:23.329 "data_offset": 2048, 00:24:23.329 "data_size": 63488 00:24:23.329 }, 00:24:23.329 { 00:24:23.329 "name": "pt2", 00:24:23.329 "uuid": "a2de61a2-d8a1-55a4-a783-abe0a9d377b0", 00:24:23.329 "is_configured": true, 00:24:23.329 "data_offset": 2048, 00:24:23.329 "data_size": 63488 00:24:23.329 }, 00:24:23.329 { 00:24:23.329 "name": "pt3", 00:24:23.329 "uuid": "290b1a65-8f9c-5cb0-81c0-17bc32039f75", 00:24:23.329 "is_configured": true, 00:24:23.329 "data_offset": 2048, 00:24:23.329 "data_size": 63488 00:24:23.329 } 00:24:23.329 ] 00:24:23.329 }' 00:24:23.329 22:32:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.329 22:32:22 -- common/autotest_common.sh@10 -- # set +x 00:24:23.896 22:32:22 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:23.896 22:32:22 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:24.155 [2024-07-14 22:32:23.070444] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.155 22:32:23 -- bdev/bdev_raid.sh@506 -- # '[' 8995e239-2920-45cb-82c7-d75336a910a2 '!=' 8995e239-2920-45cb-82c7-d75336a910a2 ']' 00:24:24.155 22:32:23 -- bdev/bdev_raid.sh@511 -- # killprocess 131276 00:24:24.155 22:32:23 -- common/autotest_common.sh@926 -- # '[' -z 131276 ']' 00:24:24.155 22:32:23 -- common/autotest_common.sh@930 -- # kill -0 131276 00:24:24.155 22:32:23 -- common/autotest_common.sh@931 -- # uname 00:24:24.155 22:32:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:24.155 22:32:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131276 00:24:24.155 22:32:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:24.155 killing process with pid 131276 00:24:24.155 22:32:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:24.155 22:32:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131276' 00:24:24.155 22:32:23 -- common/autotest_common.sh@945 -- # kill 
131276 00:24:24.155 22:32:23 -- common/autotest_common.sh@950 -- # wait 131276 00:24:24.155 [2024-07-14 22:32:23.103734] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.155 [2024-07-14 22:32:23.103833] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.155 [2024-07-14 22:32:23.103892] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.155 [2024-07-14 22:32:23.103903] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:24.412 [2024-07-14 22:32:23.291642] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.344 ************************************ 00:24:25.344 END TEST raid5f_superblock_test 00:24:25.344 ************************************ 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:25.345 00:24:25.345 real 0m18.795s 00:24:25.345 user 0m34.931s 00:24:25.345 sys 0m2.015s 00:24:25.345 22:32:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.345 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:24:25.345 22:32:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:25.345 22:32:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:25.345 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 ************************************ 00:24:25.345 START TEST raid5f_rebuild_test 00:24:25.345 ************************************ 00:24:25.345 22:32:24 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:25.345 22:32:24 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=131888 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131888 /var/tmp/spdk-raid.sock 00:24:25.345 22:32:24 -- common/autotest_common.sh@819 -- # '[' -z 131888 ']' 00:24:25.345 22:32:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:25.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:25.345 22:32:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:25.345 22:32:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:25.345 22:32:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:25.345 22:32:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:25.345 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:25.345 Zero copy mechanism will not be used. 00:24:25.345 [2024-07-14 22:32:24.316114] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:25.345 [2024-07-14 22:32:24.316311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131888 ] 00:24:25.602 [2024-07-14 22:32:24.484414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.860 [2024-07-14 22:32:24.707787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.860 [2024-07-14 22:32:24.869316] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.425 22:32:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:26.425 22:32:25 -- common/autotest_common.sh@852 -- # return 0 00:24:26.425 22:32:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:26.425 22:32:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:26.425 22:32:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:26.683 BaseBdev1 00:24:26.683 22:32:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:26.683 22:32:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:26.683 22:32:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:26.941 BaseBdev2 00:24:26.941 22:32:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:26.941 22:32:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:26.941 22:32:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:26.941 BaseBdev3 00:24:26.941 22:32:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:27.199 spare_malloc 00:24:27.199 22:32:26 -- bdev/bdev_raid.sh@559 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:27.456 spare_delay 00:24:27.456 22:32:26 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:27.714 [2024-07-14 22:32:26.579445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:27.714 [2024-07-14 22:32:26.579510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.714 [2024-07-14 22:32:26.579541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:27.714 [2024-07-14 22:32:26.579578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.714 [2024-07-14 22:32:26.581433] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.714 [2024-07-14 22:32:26.581473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:27.714 spare 00:24:27.714 22:32:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:27.714 [2024-07-14 22:32:26.771507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.714 [2024-07-14 22:32:26.773209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:27.714 [2024-07-14 22:32:26.773257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:27.714 [2024-07-14 22:32:26.773336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:24:27.714 [2024-07-14 22:32:26.773350] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:27.714 [2024-07-14 22:32:26.773491] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:27.714 [2024-07-14 22:32:26.777643] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:24:27.714 [2024-07-14 22:32:26.777666] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:24:27.714 [2024-07-14 22:32:26.777844] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:24:27.972 "name": "raid_bdev1", 00:24:27.972 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:27.972 "strip_size_kb": 64, 00:24:27.972 "state": "online", 00:24:27.972 "raid_level": "raid5f", 00:24:27.972 "superblock": false, 00:24:27.972 "num_base_bdevs": 3, 00:24:27.972 "num_base_bdevs_discovered": 3, 00:24:27.972 "num_base_bdevs_operational": 3, 00:24:27.972 "base_bdevs_list": [ 00:24:27.972 { 00:24:27.972 "name": "BaseBdev1", 00:24:27.972 "uuid": "24cffbfb-1e6a-4119-a3f7-40922ceb3488", 00:24:27.972 "is_configured": true, 00:24:27.972 "data_offset": 0, 00:24:27.972 "data_size": 65536 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "name": "BaseBdev2", 00:24:27.972 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:27.972 "is_configured": true, 00:24:27.972 "data_offset": 0, 00:24:27.972 "data_size": 65536 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "name": "BaseBdev3", 00:24:27.972 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:27.972 "is_configured": true, 00:24:27.972 "data_offset": 0, 00:24:27.972 "data_size": 65536 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }' 00:24:27.972 22:32:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.972 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.905 22:32:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:28.905 22:32:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:28.905 [2024-07-14 22:32:27.846644] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.905 22:32:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:24:28.905 22:32:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.905 22:32:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:29.163 22:32:28 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:29.163 22:32:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:29.163 22:32:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:29.163 22:32:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@12 -- # local i 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:29.163 22:32:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:29.163 [2024-07-14 22:32:28.218639] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:29.422 /dev/nbd0 00:24:29.422 22:32:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:29.422 22:32:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:29.422 22:32:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:29.422 22:32:28 -- common/autotest_common.sh@857 -- # local i 00:24:29.422 22:32:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:29.422 22:32:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:29.422 22:32:28 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:29.422 22:32:28 -- common/autotest_common.sh@861 -- # break 00:24:29.422 22:32:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:29.422 22:32:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:29.422 22:32:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:29.422 1+0 records in 00:24:29.422 1+0 records out 00:24:29.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209264 s, 19.6 MB/s 00:24:29.422 22:32:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.422 22:32:28 -- common/autotest_common.sh@874 -- # size=4096 00:24:29.422 22:32:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.422 22:32:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:29.422 22:32:28 -- common/autotest_common.sh@877 -- # return 0 00:24:29.422 22:32:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.422 22:32:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:29.422 22:32:28 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:29.422 22:32:28 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:29.422 22:32:28 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:29.422 22:32:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:24:29.680 512+0 records in 00:24:29.680 512+0 records out 00:24:29.680 67108864 bytes (67 MB, 64 MiB) copied, 0.352643 s, 190 MB/s 00:24:29.680 22:32:28 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@51 -- # local i 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.680 22:32:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.939 [2024-07-14 22:32:28.907547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@41 -- # break 00:24:29.939 22:32:28 -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.939 22:32:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:30.197 [2024-07-14 22:32:29.080793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.197 22:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.455 22:32:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.455 "name": "raid_bdev1", 00:24:30.455 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:30.455 "strip_size_kb": 64, 00:24:30.455 "state": "online", 00:24:30.455 "raid_level": "raid5f", 00:24:30.455 "superblock": false, 00:24:30.455 "num_base_bdevs": 3, 00:24:30.455 "num_base_bdevs_discovered": 2, 00:24:30.455 "num_base_bdevs_operational": 2, 00:24:30.455 "base_bdevs_list": [ 00:24:30.455 { 00:24:30.455 "name": null, 00:24:30.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.455 "is_configured": false, 00:24:30.455 "data_offset": 0, 00:24:30.455 "data_size": 65536 00:24:30.455 }, 00:24:30.455 { 00:24:30.455 "name": "BaseBdev2", 00:24:30.455 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:30.455 "is_configured": true, 00:24:30.455 "data_offset": 0, 00:24:30.455 "data_size": 65536 00:24:30.455 }, 00:24:30.455 { 00:24:30.455 "name": "BaseBdev3", 00:24:30.455 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:30.455 "is_configured": true, 00:24:30.455 "data_offset": 0, 00:24:30.455 "data_size": 65536 00:24:30.455 } 00:24:30.455 ] 00:24:30.455 }' 00:24:30.455 22:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.455 22:32:29 -- common/autotest_common.sh@10 -- # set +x 00:24:31.021 22:32:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:31.279 [2024-07-14 22:32:30.200996] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:31.279 [2024-07-14 22:32:30.201055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.279 [2024-07-14 22:32:30.211854] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:24:31.279 [2024-07-14 22:32:30.217207] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:31.279 22:32:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.214 22:32:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:32.471 "name": "raid_bdev1", 00:24:32.471 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:32.471 "strip_size_kb": 64, 00:24:32.471 "state": "online", 
00:24:32.471 "raid_level": "raid5f", 00:24:32.471 "superblock": false, 00:24:32.471 "num_base_bdevs": 3, 00:24:32.471 "num_base_bdevs_discovered": 3, 00:24:32.471 "num_base_bdevs_operational": 3, 00:24:32.471 "process": { 00:24:32.471 "type": "rebuild", 00:24:32.471 "target": "spare", 00:24:32.471 "progress": { 00:24:32.471 "blocks": 22528, 00:24:32.471 "percent": 17 00:24:32.471 } 00:24:32.471 }, 00:24:32.471 "base_bdevs_list": [ 00:24:32.471 { 00:24:32.471 "name": "spare", 00:24:32.471 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:32.471 "is_configured": true, 00:24:32.471 "data_offset": 0, 00:24:32.471 "data_size": 65536 00:24:32.471 }, 00:24:32.471 { 00:24:32.471 "name": "BaseBdev2", 00:24:32.471 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:32.471 "is_configured": true, 00:24:32.471 "data_offset": 0, 00:24:32.471 "data_size": 65536 00:24:32.471 }, 00:24:32.471 { 00:24:32.471 "name": "BaseBdev3", 00:24:32.471 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:32.471 "is_configured": true, 00:24:32.471 "data_offset": 0, 00:24:32.471 "data_size": 65536 00:24:32.471 } 00:24:32.471 ] 00:24:32.471 }' 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.471 22:32:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:32.729 [2024-07-14 22:32:31.710224] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.729 [2024-07-14 22:32:31.728576] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:32.729 [2024-07-14 22:32:31.728648] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.729 22:32:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.986 22:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.986 "name": "raid_bdev1", 00:24:32.986 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:32.986 "strip_size_kb": 64, 00:24:32.986 "state": "online", 00:24:32.986 "raid_level": "raid5f", 00:24:32.986 "superblock": false, 00:24:32.986 "num_base_bdevs": 3, 00:24:32.986 "num_base_bdevs_discovered": 2, 00:24:32.986 "num_base_bdevs_operational": 2, 00:24:32.986 "base_bdevs_list": [ 00:24:32.986 { 00:24:32.986 "name": null, 00:24:32.986 
"uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.986 "is_configured": false, 00:24:32.986 "data_offset": 0, 00:24:32.986 "data_size": 65536 00:24:32.986 }, 00:24:32.986 { 00:24:32.986 "name": "BaseBdev2", 00:24:32.986 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:32.986 "is_configured": true, 00:24:32.986 "data_offset": 0, 00:24:32.986 "data_size": 65536 00:24:32.986 }, 00:24:32.986 { 00:24:32.986 "name": "BaseBdev3", 00:24:32.986 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:32.986 "is_configured": true, 00:24:32.986 "data_offset": 0, 00:24:32.986 "data_size": 65536 00:24:32.986 } 00:24:32.986 ] 00:24:32.986 }' 00:24:32.986 22:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.986 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:33.918 "name": "raid_bdev1", 00:24:33.918 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:33.918 "strip_size_kb": 64, 00:24:33.918 "state": "online", 00:24:33.918 "raid_level": "raid5f", 00:24:33.918 "superblock": false, 00:24:33.918 "num_base_bdevs": 3, 00:24:33.918 "num_base_bdevs_discovered": 2, 00:24:33.918 "num_base_bdevs_operational": 2, 00:24:33.918 "base_bdevs_list": [ 00:24:33.918 { 00:24:33.918 "name": null, 00:24:33.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.918 "is_configured": false, 00:24:33.918 "data_offset": 0, 00:24:33.918 "data_size": 65536 00:24:33.918 }, 00:24:33.918 { 00:24:33.918 "name": "BaseBdev2", 00:24:33.918 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:33.918 "is_configured": true, 00:24:33.918 "data_offset": 0, 00:24:33.918 "data_size": 65536 00:24:33.918 }, 00:24:33.918 { 00:24:33.918 "name": "BaseBdev3", 00:24:33.918 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:33.918 "is_configured": true, 00:24:33.918 "data_offset": 0, 00:24:33.918 "data_size": 65536 00:24:33.918 } 00:24:33.918 ] 00:24:33.918 }' 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:33.918 22:32:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:34.186 [2024-07-14 22:32:33.148624] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:34.186 [2024-07-14 22:32:33.148663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:34.186 [2024-07-14 22:32:33.158669] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:24:34.186 [2024-07-14 22:32:33.163750] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on 
raid bdev raid_bdev1 00:24:34.186 22:32:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.148 22:32:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.406 22:32:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.406 "name": "raid_bdev1", 00:24:35.406 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:35.406 "strip_size_kb": 64, 00:24:35.406 "state": "online", 00:24:35.406 "raid_level": "raid5f", 00:24:35.406 "superblock": false, 00:24:35.406 "num_base_bdevs": 3, 00:24:35.406 "num_base_bdevs_discovered": 3, 00:24:35.406 "num_base_bdevs_operational": 3, 00:24:35.406 "process": { 00:24:35.406 "type": "rebuild", 00:24:35.406 "target": "spare", 00:24:35.406 "progress": { 00:24:35.406 "blocks": 24576, 00:24:35.406 "percent": 18 00:24:35.406 } 00:24:35.406 }, 00:24:35.406 "base_bdevs_list": [ 00:24:35.406 { 00:24:35.406 "name": "spare", 00:24:35.406 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:35.406 "is_configured": true, 00:24:35.406 "data_offset": 0, 00:24:35.406 "data_size": 65536 00:24:35.406 }, 00:24:35.406 { 00:24:35.406 "name": "BaseBdev2", 00:24:35.406 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:35.406 "is_configured": true, 00:24:35.406 "data_offset": 0, 00:24:35.406 "data_size": 65536 00:24:35.406 }, 00:24:35.406 { 00:24:35.406 "name": "BaseBdev3", 00:24:35.406 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:35.406 "is_configured": true, 00:24:35.406 "data_offset": 0, 00:24:35.406 "data_size": 65536 00:24:35.406 } 00:24:35.406 ] 00:24:35.406 }' 00:24:35.406 22:32:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.406 22:32:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.406 22:32:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@657 -- # local timeout=611 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.663 22:32:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.920 
"name": "raid_bdev1", 00:24:35.920 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:35.920 "strip_size_kb": 64, 00:24:35.920 "state": "online", 00:24:35.920 "raid_level": "raid5f", 00:24:35.920 "superblock": false, 00:24:35.920 "num_base_bdevs": 3, 00:24:35.920 "num_base_bdevs_discovered": 3, 00:24:35.920 "num_base_bdevs_operational": 3, 00:24:35.920 "process": { 00:24:35.920 "type": "rebuild", 00:24:35.920 "target": "spare", 00:24:35.920 "progress": { 00:24:35.920 "blocks": 30720, 00:24:35.920 "percent": 23 00:24:35.920 } 00:24:35.920 }, 00:24:35.920 "base_bdevs_list": [ 00:24:35.920 { 00:24:35.920 "name": "spare", 00:24:35.920 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:35.920 "is_configured": true, 00:24:35.920 "data_offset": 0, 00:24:35.920 "data_size": 65536 00:24:35.920 }, 00:24:35.920 { 00:24:35.920 "name": "BaseBdev2", 00:24:35.920 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:35.920 "is_configured": true, 00:24:35.920 "data_offset": 0, 00:24:35.920 "data_size": 65536 00:24:35.920 }, 00:24:35.920 { 00:24:35.920 "name": "BaseBdev3", 00:24:35.920 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:35.920 "is_configured": true, 00:24:35.920 "data_offset": 0, 00:24:35.920 "data_size": 65536 00:24:35.920 } 00:24:35.920 ] 00:24:35.920 }' 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.920 22:32:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.852 22:32:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.109 22:32:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:37.109 "name": "raid_bdev1", 00:24:37.109 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:37.109 "strip_size_kb": 64, 00:24:37.109 "state": "online", 00:24:37.109 "raid_level": "raid5f", 00:24:37.109 "superblock": false, 00:24:37.109 "num_base_bdevs": 3, 00:24:37.109 "num_base_bdevs_discovered": 3, 00:24:37.109 "num_base_bdevs_operational": 3, 00:24:37.109 "process": { 00:24:37.109 "type": "rebuild", 00:24:37.109 "target": "spare", 00:24:37.109 "progress": { 00:24:37.109 "blocks": 59392, 00:24:37.109 "percent": 45 00:24:37.109 } 00:24:37.109 }, 00:24:37.109 "base_bdevs_list": [ 00:24:37.109 { 00:24:37.109 "name": "spare", 00:24:37.109 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:37.109 "is_configured": true, 00:24:37.109 "data_offset": 0, 00:24:37.109 "data_size": 65536 00:24:37.109 }, 00:24:37.109 { 00:24:37.109 "name": "BaseBdev2", 00:24:37.109 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:37.109 "is_configured": true, 00:24:37.109 "data_offset": 0, 00:24:37.109 "data_size": 65536 00:24:37.109 }, 00:24:37.109 { 00:24:37.109 "name": 
"BaseBdev3", 00:24:37.109 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:37.109 "is_configured": true, 00:24:37.109 "data_offset": 0, 00:24:37.109 "data_size": 65536 00:24:37.109 } 00:24:37.109 ] 00:24:37.109 }' 00:24:37.109 22:32:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:37.366 22:32:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.366 22:32:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:37.366 22:32:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.366 22:32:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.297 22:32:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.555 "name": "raid_bdev1", 00:24:38.555 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:38.555 "strip_size_kb": 64, 00:24:38.555 "state": "online", 00:24:38.555 "raid_level": "raid5f", 00:24:38.555 "superblock": false, 00:24:38.555 "num_base_bdevs": 3, 00:24:38.555 "num_base_bdevs_discovered": 3, 00:24:38.555 "num_base_bdevs_operational": 3, 00:24:38.555 "process": { 00:24:38.555 "type": "rebuild", 00:24:38.555 "target": "spare", 00:24:38.555 "progress": { 00:24:38.555 "blocks": 86016, 00:24:38.555 "percent": 65 00:24:38.555 } 00:24:38.555 }, 00:24:38.555 "base_bdevs_list": [ 00:24:38.555 { 00:24:38.555 "name": "spare", 00:24:38.555 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:38.555 "is_configured": true, 00:24:38.555 "data_offset": 0, 00:24:38.555 "data_size": 65536 00:24:38.555 }, 00:24:38.555 { 00:24:38.555 "name": "BaseBdev2", 00:24:38.555 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:38.555 "is_configured": true, 00:24:38.555 "data_offset": 0, 00:24:38.555 "data_size": 65536 00:24:38.555 }, 00:24:38.555 { 00:24:38.555 "name": "BaseBdev3", 00:24:38.555 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:38.555 "is_configured": true, 00:24:38.555 "data_offset": 0, 00:24:38.555 "data_size": 65536 00:24:38.555 } 00:24:38.555 ] 00:24:38.555 }' 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.555 22:32:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:39.930 22:32:38 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.930 "name": "raid_bdev1", 00:24:39.930 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:39.930 "strip_size_kb": 64, 00:24:39.930 "state": "online", 00:24:39.930 "raid_level": "raid5f", 00:24:39.930 "superblock": false, 00:24:39.930 "num_base_bdevs": 3, 00:24:39.930 "num_base_bdevs_discovered": 3, 00:24:39.930 "num_base_bdevs_operational": 3, 00:24:39.930 "process": { 00:24:39.930 "type": "rebuild", 00:24:39.930 "target": "spare", 00:24:39.930 "progress": { 00:24:39.930 "blocks": 114688, 00:24:39.930 "percent": 87 00:24:39.930 } 00:24:39.930 }, 00:24:39.930 "base_bdevs_list": [ 00:24:39.930 { 00:24:39.930 "name": "spare", 00:24:39.930 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:39.930 "is_configured": true, 00:24:39.930 "data_offset": 0, 00:24:39.930 "data_size": 65536 00:24:39.930 }, 00:24:39.930 { 00:24:39.930 "name": "BaseBdev2", 00:24:39.930 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:39.930 "is_configured": true, 00:24:39.930 "data_offset": 0, 00:24:39.930 "data_size": 65536 00:24:39.930 }, 00:24:39.930 { 00:24:39.930 "name": "BaseBdev3", 00:24:39.930 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:39.930 "is_configured": true, 00:24:39.930 "data_offset": 0, 00:24:39.930 "data_size": 65536 00:24:39.930 } 00:24:39.930 ] 00:24:39.930 }' 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.930 22:32:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:40.866 [2024-07-14 22:32:39.609670] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:40.866 [2024-07-14 22:32:39.609745] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:40.866 [2024-07-14 22:32:39.609807] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.126 22:32:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.126 22:32:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:41.126 "name": "raid_bdev1", 00:24:41.126 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:41.126 "strip_size_kb": 64, 00:24:41.126 "state": "online", 00:24:41.126 "raid_level": "raid5f", 00:24:41.126 "superblock": false, 00:24:41.126 "num_base_bdevs": 3, 00:24:41.126 "num_base_bdevs_discovered": 3, 
00:24:41.126 "num_base_bdevs_operational": 3, 00:24:41.126 "base_bdevs_list": [ 00:24:41.126 { 00:24:41.126 "name": "spare", 00:24:41.126 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:41.126 "is_configured": true, 00:24:41.126 "data_offset": 0, 00:24:41.126 "data_size": 65536 00:24:41.126 }, 00:24:41.126 { 00:24:41.126 "name": "BaseBdev2", 00:24:41.126 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:41.126 "is_configured": true, 00:24:41.126 "data_offset": 0, 00:24:41.126 "data_size": 65536 00:24:41.126 }, 00:24:41.126 { 00:24:41.126 "name": "BaseBdev3", 00:24:41.126 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:41.126 "is_configured": true, 00:24:41.126 "data_offset": 0, 00:24:41.126 "data_size": 65536 00:24:41.126 } 00:24:41.126 ] 00:24:41.126 }' 00:24:41.126 22:32:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@660 -- # break 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.385 22:32:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:41.644 "name": "raid_bdev1", 00:24:41.644 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:41.644 "strip_size_kb": 64, 00:24:41.644 "state": "online", 00:24:41.644 "raid_level": "raid5f", 00:24:41.644 "superblock": false, 00:24:41.644 "num_base_bdevs": 3, 00:24:41.644 "num_base_bdevs_discovered": 3, 00:24:41.644 "num_base_bdevs_operational": 3, 00:24:41.644 "base_bdevs_list": [ 00:24:41.644 { 00:24:41.644 "name": "spare", 00:24:41.644 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:41.644 "is_configured": true, 00:24:41.644 "data_offset": 0, 00:24:41.644 "data_size": 65536 00:24:41.644 }, 00:24:41.644 { 00:24:41.644 "name": "BaseBdev2", 00:24:41.644 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:41.644 "is_configured": true, 00:24:41.644 "data_offset": 0, 00:24:41.644 "data_size": 65536 00:24:41.644 }, 00:24:41.644 { 00:24:41.644 "name": "BaseBdev3", 00:24:41.644 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:41.644 "is_configured": true, 00:24:41.644 "data_offset": 0, 00:24:41.644 "data_size": 65536 00:24:41.644 } 00:24:41.644 ] 00:24:41.644 }' 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.644 22:32:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.903 22:32:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.903 "name": "raid_bdev1", 00:24:41.903 "uuid": "5bdba7eb-c8d6-45f5-96b0-ea00da9292f5", 00:24:41.903 "strip_size_kb": 64, 00:24:41.903 "state": "online", 00:24:41.903 "raid_level": "raid5f", 00:24:41.903 "superblock": false, 00:24:41.903 "num_base_bdevs": 3, 00:24:41.903 "num_base_bdevs_discovered": 3, 00:24:41.903 "num_base_bdevs_operational": 3, 00:24:41.903 "base_bdevs_list": [ 00:24:41.903 { 00:24:41.903 "name": "spare", 00:24:41.903 "uuid": "260bd409-b4d5-5068-985d-1bbb2452db55", 00:24:41.903 "is_configured": true, 00:24:41.903 "data_offset": 0, 00:24:41.903 "data_size": 65536 00:24:41.903 }, 00:24:41.903 { 00:24:41.903 "name": "BaseBdev2", 00:24:41.903 "uuid": "5458b45d-70fc-4c18-abea-d0fe0b46635d", 00:24:41.903 "is_configured": true, 00:24:41.903 "data_offset": 0, 00:24:41.903 "data_size": 65536 00:24:41.903 }, 00:24:41.903 { 00:24:41.903 "name": "BaseBdev3", 00:24:41.903 "uuid": "bf306c18-e0de-45f3-b56d-dad0bc0803ee", 00:24:41.903 "is_configured": true, 00:24:41.903 "data_offset": 0, 00:24:41.903 "data_size": 65536 00:24:41.903 } 00:24:41.903 ] 00:24:41.903 }' 00:24:41.903 22:32:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.903 22:32:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.471 22:32:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:42.730 [2024-07-14 22:32:41.741389] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:42.730 [2024-07-14 22:32:41.741415] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:42.730 [2024-07-14 22:32:41.741482] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:42.730 [2024-07-14 22:32:41.741551] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:42.730 [2024-07-14 22:32:41.741564] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:42.730 22:32:41 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.730 22:32:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:42.989 22:32:41 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:42.989 22:32:41 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:42.989 22:32:41 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:42.989 22:32:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.989 22:32:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
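The nbd_start_disks expansion continuing below sets up the data-integrity check for the rebuild: BaseBdev1 and the rebuilt spare are exported as /dev/nbd0 and /dev/nbd1 and compared byte for byte. Stripped of the helper plumbing, the check amounts to the following sketch (bdev and device names taken from the trace; root privileges and a loaded nbd kernel module are assumed):

    # Export both bdevs over NBD, verify they are byte-identical, then detach.
    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1 && echo 'rebuilt spare matches BaseBdev1'  # -i 0: no data_offset to skip in this run
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1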
00:24:42.989 22:32:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@12 -- # local i 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.990 22:32:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:43.249 /dev/nbd0 00:24:43.249 22:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:43.249 22:32:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:43.249 22:32:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:43.249 22:32:42 -- common/autotest_common.sh@857 -- # local i 00:24:43.249 22:32:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:43.249 22:32:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:43.249 22:32:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:43.249 22:32:42 -- common/autotest_common.sh@861 -- # break 00:24:43.249 22:32:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:43.249 22:32:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:43.249 22:32:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:43.249 1+0 records in 00:24:43.249 1+0 records out 00:24:43.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500049 s, 8.2 MB/s 00:24:43.249 22:32:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.249 22:32:42 -- common/autotest_common.sh@874 -- # size=4096 00:24:43.249 22:32:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.249 22:32:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:43.249 22:32:42 -- common/autotest_common.sh@877 -- # return 0 00:24:43.249 22:32:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:43.249 22:32:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:43.249 22:32:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:43.508 /dev/nbd1 00:24:43.508 22:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:43.508 22:32:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:43.508 22:32:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:43.508 22:32:42 -- common/autotest_common.sh@857 -- # local i 00:24:43.508 22:32:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:43.508 22:32:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:43.508 22:32:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:43.508 22:32:42 -- common/autotest_common.sh@861 -- # break 00:24:43.508 22:32:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:43.508 22:32:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:43.508 22:32:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:43.508 1+0 records in 00:24:43.508 1+0 records out 00:24:43.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500942 s, 8.2 MB/s 00:24:43.508 22:32:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.508 22:32:42 -- common/autotest_common.sh@874 -- # 
size=4096 00:24:43.508 22:32:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.508 22:32:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:43.508 22:32:42 -- common/autotest_common.sh@877 -- # return 0 00:24:43.508 22:32:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:43.508 22:32:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:43.508 22:32:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:43.767 22:32:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@51 -- # local i 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:43.767 22:32:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@41 -- # break 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@45 -- # return 0 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:44.026 22:32:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@41 -- # break 00:24:44.285 22:32:43 -- bdev/nbd_common.sh@45 -- # return 0 00:24:44.285 22:32:43 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:44.285 22:32:43 -- bdev/bdev_raid.sh@709 -- # killprocess 131888 00:24:44.285 22:32:43 -- common/autotest_common.sh@926 -- # '[' -z 131888 ']' 00:24:44.285 22:32:43 -- common/autotest_common.sh@930 -- # kill -0 131888 00:24:44.285 22:32:43 -- common/autotest_common.sh@931 -- # uname 00:24:44.285 22:32:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:44.285 22:32:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131888 00:24:44.285 22:32:43 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:44.285 22:32:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:44.285 22:32:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131888' 00:24:44.285 killing process with pid 131888 00:24:44.285 22:32:43 -- common/autotest_common.sh@945 -- # kill 131888 00:24:44.285 Received shutdown signal, test time was about 60.000000 seconds 00:24:44.285 00:24:44.285 Latency(us) 00:24:44.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.285 =================================================================================================================== 00:24:44.285 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:44.285 22:32:43 -- common/autotest_common.sh@950 -- # wait 131888 00:24:44.285 [2024-07-14 22:32:43.258785] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:44.544 [2024-07-14 22:32:43.507627] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:45.479 ************************************ 00:24:45.479 END TEST raid5f_rebuild_test 00:24:45.479 ************************************ 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:45.479 00:24:45.479 real 0m20.160s 00:24:45.479 user 0m30.450s 00:24:45.479 sys 0m2.224s 00:24:45.479 22:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.479 22:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:24:45.479 22:32:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:45.479 22:32:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:45.479 22:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:45.479 ************************************ 00:24:45.479 START TEST raid5f_rebuild_test_sb 00:24:45.479 ************************************ 00:24:45.479 22:32:44 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@524 -- # local 
create_arg 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@544 -- # raid_pid=132456 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132456 /var/tmp/spdk-raid.sock 00:24:45.479 22:32:44 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:45.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:45.479 22:32:44 -- common/autotest_common.sh@819 -- # '[' -z 132456 ']' 00:24:45.479 22:32:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:45.479 22:32:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:45.479 22:32:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:45.479 22:32:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:45.479 22:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:45.479 [2024-07-14 22:32:44.537429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:45.479 [2024-07-14 22:32:44.537609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132456 ] 00:24:45.479 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:45.479 Zero copy mechanism will not be used. 
00:24:45.743 [2024-07-14 22:32:44.703592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.001 [2024-07-14 22:32:44.858695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.001 [2024-07-14 22:32:45.020215] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.569 22:32:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:46.569 22:32:45 -- common/autotest_common.sh@852 -- # return 0 00:24:46.569 22:32:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:46.569 22:32:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:46.569 22:32:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:46.828 BaseBdev1_malloc 00:24:46.828 22:32:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:47.087 [2024-07-14 22:32:45.950938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:47.087 [2024-07-14 22:32:45.951022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.087 [2024-07-14 22:32:45.951049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:47.087 [2024-07-14 22:32:45.951084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.087 [2024-07-14 22:32:45.952872] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.087 [2024-07-14 22:32:45.952915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:47.087 BaseBdev1 00:24:47.087 22:32:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:47.087 22:32:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:47.087 22:32:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:47.345 BaseBdev2_malloc 00:24:47.345 22:32:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:47.345 [2024-07-14 22:32:46.372961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:47.345 [2024-07-14 22:32:46.373211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.345 [2024-07-14 22:32:46.373281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:47.345 [2024-07-14 22:32:46.373428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.345 [2024-07-14 22:32:46.375502] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.345 [2024-07-14 22:32:46.375663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:47.345 BaseBdev2 00:24:47.345 22:32:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:47.345 22:32:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:47.345 22:32:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:47.604 BaseBdev3_malloc 00:24:47.604 22:32:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:24:47.863 [2024-07-14 22:32:46.777735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:47.863 [2024-07-14 22:32:46.777930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.863 [2024-07-14 22:32:46.778006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:47.863 [2024-07-14 22:32:46.778153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.863 [2024-07-14 22:32:46.780222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.863 [2024-07-14 22:32:46.780388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:47.863 BaseBdev3 00:24:47.863 22:32:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:48.121 spare_malloc 00:24:48.121 22:32:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:48.380 spare_delay 00:24:48.380 22:32:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:48.380 [2024-07-14 22:32:47.434404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:48.380 [2024-07-14 22:32:47.434673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.380 [2024-07-14 22:32:47.434735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:48.380 [2024-07-14 22:32:47.434861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.380 [2024-07-14 22:32:47.436791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.380 [2024-07-14 22:32:47.436953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:48.380 spare 00:24:48.380 22:32:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:48.638 [2024-07-14 22:32:47.674516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.638 [2024-07-14 22:32:47.676324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:48.638 [2024-07-14 22:32:47.676503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:48.638 [2024-07-14 22:32:47.676790] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:24:48.639 [2024-07-14 22:32:47.676898] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:48.639 [2024-07-14 22:32:47.677039] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:48.639 [2024-07-14 22:32:47.681164] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:48.639 [2024-07-14 22:32:47.681294] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:48.639 [2024-07-14 22:32:47.681532] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.639 22:32:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.898 22:32:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.898 "name": "raid_bdev1", 00:24:48.898 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:48.898 "strip_size_kb": 64, 00:24:48.898 "state": "online", 00:24:48.898 "raid_level": "raid5f", 00:24:48.898 "superblock": true, 00:24:48.898 "num_base_bdevs": 3, 00:24:48.898 "num_base_bdevs_discovered": 3, 00:24:48.898 "num_base_bdevs_operational": 3, 00:24:48.898 "base_bdevs_list": [ 00:24:48.898 { 00:24:48.898 "name": "BaseBdev1", 00:24:48.898 "uuid": "a8f5ae75-7f80-511f-9eaa-d1f4411c066c", 00:24:48.898 "is_configured": true, 00:24:48.898 "data_offset": 2048, 00:24:48.898 "data_size": 63488 00:24:48.898 }, 00:24:48.898 { 00:24:48.898 "name": "BaseBdev2", 00:24:48.898 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:48.898 "is_configured": true, 00:24:48.898 "data_offset": 2048, 00:24:48.898 "data_size": 63488 00:24:48.898 }, 00:24:48.898 { 00:24:48.898 "name": "BaseBdev3", 00:24:48.898 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:48.898 "is_configured": true, 00:24:48.898 "data_offset": 2048, 00:24:48.898 "data_size": 63488 00:24:48.898 } 00:24:48.898 ] 00:24:48.898 }' 00:24:48.898 22:32:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.898 22:32:47 -- common/autotest_common.sh@10 -- # set +x 00:24:49.466 22:32:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:49.466 22:32:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:49.725 [2024-07-14 22:32:48.694391] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.725 22:32:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:49.725 22:32:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.725 22:32:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:49.984 22:32:48 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:49.984 22:32:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:49.984 22:32:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:49.984 22:32:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.984 22:32:48 -- 
bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@12 -- # local i 00:24:49.984 22:32:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.985 22:32:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.985 22:32:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:50.244 [2024-07-14 22:32:49.114387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:50.244 /dev/nbd0 00:24:50.244 22:32:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:50.244 22:32:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:50.244 22:32:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:50.244 22:32:49 -- common/autotest_common.sh@857 -- # local i 00:24:50.244 22:32:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:50.244 22:32:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:50.244 22:32:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:50.244 22:32:49 -- common/autotest_common.sh@861 -- # break 00:24:50.244 22:32:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:50.244 22:32:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:50.244 22:32:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.244 1+0 records in 00:24:50.244 1+0 records out 00:24:50.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361793 s, 11.3 MB/s 00:24:50.244 22:32:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.244 22:32:49 -- common/autotest_common.sh@874 -- # size=4096 00:24:50.244 22:32:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.244 22:32:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:50.244 22:32:49 -- common/autotest_common.sh@877 -- # return 0 00:24:50.244 22:32:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.244 22:32:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.244 22:32:49 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:50.244 22:32:49 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:50.244 22:32:49 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:50.244 22:32:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:50.503 496+0 records in 00:24:50.503 496+0 records out 00:24:50.503 65011712 bytes (65 MB, 62 MiB) copied, 0.299399 s, 217 MB/s 00:24:50.503 22:32:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@51 -- # local i 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.503 22:32:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:50.762 [2024-07-14 22:32:49.704276] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@41 -- # break 00:24:50.762 22:32:49 -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.762 22:32:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:51.021 [2024-07-14 22:32:49.993246] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.021 22:32:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.279 22:32:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.279 "name": "raid_bdev1", 00:24:51.279 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:51.279 "strip_size_kb": 64, 00:24:51.279 "state": "online", 00:24:51.279 "raid_level": "raid5f", 00:24:51.279 "superblock": true, 00:24:51.279 "num_base_bdevs": 3, 00:24:51.279 "num_base_bdevs_discovered": 2, 00:24:51.279 "num_base_bdevs_operational": 2, 00:24:51.279 "base_bdevs_list": [ 00:24:51.279 { 00:24:51.279 "name": null, 00:24:51.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.279 "is_configured": false, 00:24:51.279 "data_offset": 2048, 00:24:51.279 "data_size": 63488 00:24:51.279 }, 00:24:51.279 { 00:24:51.279 "name": "BaseBdev2", 00:24:51.279 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:51.279 "is_configured": true, 00:24:51.279 "data_offset": 2048, 00:24:51.279 "data_size": 63488 00:24:51.279 }, 00:24:51.279 { 00:24:51.279 "name": "BaseBdev3", 00:24:51.279 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:51.279 "is_configured": true, 00:24:51.279 "data_offset": 2048, 00:24:51.279 "data_size": 63488 00:24:51.279 } 00:24:51.279 ] 00:24:51.279 }' 00:24:51.279 22:32:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.279 22:32:50 -- common/autotest_common.sh@10 -- # set +x 00:24:51.845 22:32:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.104 [2024-07-14 22:32:51.081463] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 
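Note: the full-device write traced above is internally consistent with the raid bdev geometry reported earlier: raid_bdev_size is 126,976 blocks of 512 bytes (126,976 x 512 = 65,011,712 bytes), write_unit_size is 256 blocks (256 x 512 = 131,072 bytes, the dd block size), and 65,011,712 / 131,072 = 496, matching dd's "496+0 records out". The waitfornbd helper that gates each nbd_start_disk call appears in the trace only as numbered commands from common/autotest_common.sh; a minimal reconstruction of what those commands imply is sketched below. The two 20-iteration loops, the grep/dd/stat/rm commands and the nbdtest path are taken from the trace itself; the sleep between retries is an assumption (the trace only shows iterations that succeed immediately), so treat this as a sketch, not the verbatim helper.

    # reconstructed from the xtrace output in this log; not the verbatim source
    waitfornbd() {
        local nbd_name=$1
        local i size
        # poll (up to 20 times) until the nbd device appears in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed poll interval; not visible in this trace
        done
        # then retry a single direct 4 KiB read until the device returns data
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed
        done
        return 1
    }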
00:24:52.104 [2024-07-14 22:32:51.081505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.104 [2024-07-14 22:32:51.092268] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:24:52.104 [2024-07-14 22:32:51.097844] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.104 22:32:51 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:53.477 "name": "raid_bdev1", 00:24:53.477 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:53.477 "strip_size_kb": 64, 00:24:53.477 "state": "online", 00:24:53.477 "raid_level": "raid5f", 00:24:53.477 "superblock": true, 00:24:53.477 "num_base_bdevs": 3, 00:24:53.477 "num_base_bdevs_discovered": 3, 00:24:53.477 "num_base_bdevs_operational": 3, 00:24:53.477 "process": { 00:24:53.477 "type": "rebuild", 00:24:53.477 "target": "spare", 00:24:53.477 "progress": { 00:24:53.477 "blocks": 24576, 00:24:53.477 "percent": 19 00:24:53.477 } 00:24:53.477 }, 00:24:53.477 "base_bdevs_list": [ 00:24:53.477 { 00:24:53.477 "name": "spare", 00:24:53.477 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:24:53.477 "is_configured": true, 00:24:53.477 "data_offset": 2048, 00:24:53.477 "data_size": 63488 00:24:53.477 }, 00:24:53.477 { 00:24:53.477 "name": "BaseBdev2", 00:24:53.477 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:53.477 "is_configured": true, 00:24:53.477 "data_offset": 2048, 00:24:53.477 "data_size": 63488 00:24:53.477 }, 00:24:53.477 { 00:24:53.477 "name": "BaseBdev3", 00:24:53.477 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:53.477 "is_configured": true, 00:24:53.477 "data_offset": 2048, 00:24:53.477 "data_size": 63488 00:24:53.477 } 00:24:53.477 ] 00:24:53.477 }' 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.477 22:32:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:53.734 [2024-07-14 22:32:52.731335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:53.992 [2024-07-14 22:32:52.811197] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:53.992 [2024-07-14 22:32:52.811276] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:53.992 22:32:52 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.992 22:32:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.250 22:32:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.250 "name": "raid_bdev1", 00:24:54.250 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:54.250 "strip_size_kb": 64, 00:24:54.250 "state": "online", 00:24:54.250 "raid_level": "raid5f", 00:24:54.250 "superblock": true, 00:24:54.250 "num_base_bdevs": 3, 00:24:54.250 "num_base_bdevs_discovered": 2, 00:24:54.250 "num_base_bdevs_operational": 2, 00:24:54.250 "base_bdevs_list": [ 00:24:54.250 { 00:24:54.250 "name": null, 00:24:54.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.250 "is_configured": false, 00:24:54.250 "data_offset": 2048, 00:24:54.250 "data_size": 63488 00:24:54.250 }, 00:24:54.250 { 00:24:54.250 "name": "BaseBdev2", 00:24:54.250 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:54.250 "is_configured": true, 00:24:54.250 "data_offset": 2048, 00:24:54.250 "data_size": 63488 00:24:54.250 }, 00:24:54.250 { 00:24:54.250 "name": "BaseBdev3", 00:24:54.250 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:54.250 "is_configured": true, 00:24:54.250 "data_offset": 2048, 00:24:54.250 "data_size": 63488 00:24:54.250 } 00:24:54.250 ] 00:24:54.250 }' 00:24:54.250 22:32:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.250 22:32:53 -- common/autotest_common.sh@10 -- # set +x 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.815 22:32:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.073 22:32:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.073 "name": "raid_bdev1", 00:24:55.073 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:55.073 "strip_size_kb": 64, 00:24:55.073 "state": "online", 00:24:55.073 "raid_level": "raid5f", 00:24:55.073 "superblock": true, 00:24:55.073 "num_base_bdevs": 3, 00:24:55.073 "num_base_bdevs_discovered": 2, 00:24:55.073 "num_base_bdevs_operational": 2, 00:24:55.073 "base_bdevs_list": [ 00:24:55.073 { 00:24:55.073 "name": null, 00:24:55.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.073 "is_configured": false, 00:24:55.073 "data_offset": 2048, 00:24:55.073 "data_size": 63488 00:24:55.073 }, 00:24:55.073 { 00:24:55.073 "name": "BaseBdev2", 00:24:55.073 "uuid": 
"0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:55.073 "is_configured": true, 00:24:55.073 "data_offset": 2048, 00:24:55.073 "data_size": 63488 00:24:55.073 }, 00:24:55.073 { 00:24:55.073 "name": "BaseBdev3", 00:24:55.073 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:55.073 "is_configured": true, 00:24:55.073 "data_offset": 2048, 00:24:55.073 "data_size": 63488 00:24:55.073 } 00:24:55.073 ] 00:24:55.073 }' 00:24:55.073 22:32:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.073 22:32:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:55.073 22:32:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.073 22:32:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:55.073 22:32:54 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:55.330 [2024-07-14 22:32:54.247055] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:55.330 [2024-07-14 22:32:54.247096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.331 [2024-07-14 22:32:54.257064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:24:55.331 [2024-07-14 22:32:54.262276] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.331 22:32:54 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.262 22:32:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.520 22:32:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:56.520 "name": "raid_bdev1", 00:24:56.520 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:56.520 "strip_size_kb": 64, 00:24:56.520 "state": "online", 00:24:56.520 "raid_level": "raid5f", 00:24:56.520 "superblock": true, 00:24:56.520 "num_base_bdevs": 3, 00:24:56.520 "num_base_bdevs_discovered": 3, 00:24:56.520 "num_base_bdevs_operational": 3, 00:24:56.520 "process": { 00:24:56.520 "type": "rebuild", 00:24:56.520 "target": "spare", 00:24:56.520 "progress": { 00:24:56.520 "blocks": 22528, 00:24:56.520 "percent": 17 00:24:56.520 } 00:24:56.520 }, 00:24:56.520 "base_bdevs_list": [ 00:24:56.520 { 00:24:56.520 "name": "spare", 00:24:56.520 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:24:56.520 "is_configured": true, 00:24:56.520 "data_offset": 2048, 00:24:56.520 "data_size": 63488 00:24:56.520 }, 00:24:56.520 { 00:24:56.520 "name": "BaseBdev2", 00:24:56.520 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:56.520 "is_configured": true, 00:24:56.520 "data_offset": 2048, 00:24:56.520 "data_size": 63488 00:24:56.520 }, 00:24:56.520 { 00:24:56.520 "name": "BaseBdev3", 00:24:56.520 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:56.520 "is_configured": true, 00:24:56.520 "data_offset": 2048, 00:24:56.520 "data_size": 63488 00:24:56.520 } 00:24:56.520 ] 00:24:56.520 }' 00:24:56.520 22:32:55 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:24:56.520 22:32:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.520 22:32:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:56.778 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@657 -- # local timeout=632 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.778 22:32:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:56.778 "name": "raid_bdev1", 00:24:56.778 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:56.778 "strip_size_kb": 64, 00:24:56.778 "state": "online", 00:24:56.778 "raid_level": "raid5f", 00:24:56.778 "superblock": true, 00:24:56.778 "num_base_bdevs": 3, 00:24:56.778 "num_base_bdevs_discovered": 3, 00:24:56.778 "num_base_bdevs_operational": 3, 00:24:56.778 "process": { 00:24:56.778 "type": "rebuild", 00:24:56.778 "target": "spare", 00:24:56.778 "progress": { 00:24:56.778 "blocks": 30720, 00:24:56.778 "percent": 24 00:24:56.778 } 00:24:56.778 }, 00:24:56.778 "base_bdevs_list": [ 00:24:56.778 { 00:24:56.778 "name": "spare", 00:24:56.778 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:24:56.778 "is_configured": true, 00:24:56.778 "data_offset": 2048, 00:24:56.778 "data_size": 63488 00:24:56.778 }, 00:24:56.778 { 00:24:56.778 "name": "BaseBdev2", 00:24:56.778 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:56.778 "is_configured": true, 00:24:56.778 "data_offset": 2048, 00:24:56.778 "data_size": 63488 00:24:56.778 }, 00:24:56.778 { 00:24:56.778 "name": "BaseBdev3", 00:24:56.778 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:56.778 "is_configured": true, 00:24:56.778 "data_offset": 2048, 00:24:56.778 "data_size": 63488 00:24:56.778 } 00:24:56.778 ] 00:24:56.778 }' 00:24:56.779 22:32:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:57.037 22:32:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.037 22:32:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:57.037 22:32:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.037 22:32:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.987 22:32:56 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.987 22:32:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.245 22:32:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:58.245 "name": "raid_bdev1", 00:24:58.245 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:58.245 "strip_size_kb": 64, 00:24:58.245 "state": "online", 00:24:58.245 "raid_level": "raid5f", 00:24:58.245 "superblock": true, 00:24:58.245 "num_base_bdevs": 3, 00:24:58.245 "num_base_bdevs_discovered": 3, 00:24:58.245 "num_base_bdevs_operational": 3, 00:24:58.245 "process": { 00:24:58.245 "type": "rebuild", 00:24:58.245 "target": "spare", 00:24:58.245 "progress": { 00:24:58.245 "blocks": 59392, 00:24:58.245 "percent": 46 00:24:58.245 } 00:24:58.245 }, 00:24:58.245 "base_bdevs_list": [ 00:24:58.245 { 00:24:58.245 "name": "spare", 00:24:58.245 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:24:58.245 "is_configured": true, 00:24:58.245 "data_offset": 2048, 00:24:58.245 "data_size": 63488 00:24:58.245 }, 00:24:58.245 { 00:24:58.245 "name": "BaseBdev2", 00:24:58.245 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:58.245 "is_configured": true, 00:24:58.245 "data_offset": 2048, 00:24:58.245 "data_size": 63488 00:24:58.245 }, 00:24:58.245 { 00:24:58.245 "name": "BaseBdev3", 00:24:58.245 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:58.245 "is_configured": true, 00:24:58.245 "data_offset": 2048, 00:24:58.245 "data_size": 63488 00:24:58.245 } 00:24:58.245 ] 00:24:58.245 }' 00:24:58.245 22:32:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:58.245 22:32:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.245 22:32:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:58.503 22:32:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.503 22:32:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.437 22:32:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.696 "name": "raid_bdev1", 00:24:59.696 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:24:59.696 "strip_size_kb": 64, 00:24:59.696 "state": "online", 00:24:59.696 "raid_level": "raid5f", 00:24:59.696 "superblock": true, 00:24:59.696 "num_base_bdevs": 3, 00:24:59.696 "num_base_bdevs_discovered": 3, 00:24:59.696 "num_base_bdevs_operational": 3, 00:24:59.696 "process": { 00:24:59.696 "type": "rebuild", 00:24:59.696 "target": "spare", 00:24:59.696 "progress": { 00:24:59.696 "blocks": 88064, 00:24:59.696 "percent": 69 00:24:59.696 } 
00:24:59.696 }, 00:24:59.696 "base_bdevs_list": [ 00:24:59.696 { 00:24:59.696 "name": "spare", 00:24:59.696 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:24:59.696 "is_configured": true, 00:24:59.696 "data_offset": 2048, 00:24:59.696 "data_size": 63488 00:24:59.696 }, 00:24:59.696 { 00:24:59.696 "name": "BaseBdev2", 00:24:59.696 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:24:59.696 "is_configured": true, 00:24:59.696 "data_offset": 2048, 00:24:59.696 "data_size": 63488 00:24:59.696 }, 00:24:59.696 { 00:24:59.696 "name": "BaseBdev3", 00:24:59.696 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:24:59.696 "is_configured": true, 00:24:59.696 "data_offset": 2048, 00:24:59.696 "data_size": 63488 00:24:59.696 } 00:24:59.696 ] 00:24:59.696 }' 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.696 22:32:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.092 "name": "raid_bdev1", 00:25:01.092 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:25:01.092 "strip_size_kb": 64, 00:25:01.092 "state": "online", 00:25:01.092 "raid_level": "raid5f", 00:25:01.092 "superblock": true, 00:25:01.092 "num_base_bdevs": 3, 00:25:01.092 "num_base_bdevs_discovered": 3, 00:25:01.092 "num_base_bdevs_operational": 3, 00:25:01.092 "process": { 00:25:01.092 "type": "rebuild", 00:25:01.092 "target": "spare", 00:25:01.092 "progress": { 00:25:01.092 "blocks": 114688, 00:25:01.092 "percent": 90 00:25:01.092 } 00:25:01.092 }, 00:25:01.092 "base_bdevs_list": [ 00:25:01.092 { 00:25:01.092 "name": "spare", 00:25:01.092 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:01.092 "is_configured": true, 00:25:01.092 "data_offset": 2048, 00:25:01.092 "data_size": 63488 00:25:01.092 }, 00:25:01.092 { 00:25:01.092 "name": "BaseBdev2", 00:25:01.092 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:01.092 "is_configured": true, 00:25:01.092 "data_offset": 2048, 00:25:01.092 "data_size": 63488 00:25:01.092 }, 00:25:01.092 { 00:25:01.092 "name": "BaseBdev3", 00:25:01.092 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:01.092 "is_configured": true, 00:25:01.092 "data_offset": 2048, 00:25:01.092 "data_size": 63488 00:25:01.092 } 00:25:01.092 ] 00:25:01.092 }' 00:25:01.092 22:32:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.092 22:33:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:01.092 22:33:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.092 22:33:00 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.092 22:33:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:01.660 [2024-07-14 22:33:00.510007] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:01.660 [2024-07-14 22:33:00.510074] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:01.660 [2024-07-14 22:33:00.510197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.226 22:33:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:02.485 "name": "raid_bdev1", 00:25:02.485 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:25:02.485 "strip_size_kb": 64, 00:25:02.485 "state": "online", 00:25:02.485 "raid_level": "raid5f", 00:25:02.485 "superblock": true, 00:25:02.485 "num_base_bdevs": 3, 00:25:02.485 "num_base_bdevs_discovered": 3, 00:25:02.485 "num_base_bdevs_operational": 3, 00:25:02.485 "base_bdevs_list": [ 00:25:02.485 { 00:25:02.485 "name": "spare", 00:25:02.485 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:02.485 "is_configured": true, 00:25:02.485 "data_offset": 2048, 00:25:02.485 "data_size": 63488 00:25:02.485 }, 00:25:02.485 { 00:25:02.485 "name": "BaseBdev2", 00:25:02.485 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:02.485 "is_configured": true, 00:25:02.485 "data_offset": 2048, 00:25:02.485 "data_size": 63488 00:25:02.485 }, 00:25:02.485 { 00:25:02.485 "name": "BaseBdev3", 00:25:02.485 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:02.485 "is_configured": true, 00:25:02.485 "data_offset": 2048, 00:25:02.485 "data_size": 63488 00:25:02.485 } 00:25:02.485 ] 00:25:02.485 }' 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@660 -- # break 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.485 22:33:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.743 22:33:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:02.743 "name": "raid_bdev1", 00:25:02.743 "uuid": 
"29579202-a30d-4217-9e1c-3383ce94665c", 00:25:02.743 "strip_size_kb": 64, 00:25:02.743 "state": "online", 00:25:02.743 "raid_level": "raid5f", 00:25:02.743 "superblock": true, 00:25:02.743 "num_base_bdevs": 3, 00:25:02.743 "num_base_bdevs_discovered": 3, 00:25:02.743 "num_base_bdevs_operational": 3, 00:25:02.743 "base_bdevs_list": [ 00:25:02.743 { 00:25:02.743 "name": "spare", 00:25:02.743 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:02.743 "is_configured": true, 00:25:02.743 "data_offset": 2048, 00:25:02.743 "data_size": 63488 00:25:02.743 }, 00:25:02.743 { 00:25:02.743 "name": "BaseBdev2", 00:25:02.743 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:02.743 "is_configured": true, 00:25:02.743 "data_offset": 2048, 00:25:02.743 "data_size": 63488 00:25:02.743 }, 00:25:02.743 { 00:25:02.743 "name": "BaseBdev3", 00:25:02.743 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:02.743 "is_configured": true, 00:25:02.743 "data_offset": 2048, 00:25:02.743 "data_size": 63488 00:25:02.743 } 00:25:02.743 ] 00:25:02.743 }' 00:25:02.743 22:33:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.002 22:33:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.260 22:33:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.260 "name": "raid_bdev1", 00:25:03.260 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:25:03.260 "strip_size_kb": 64, 00:25:03.260 "state": "online", 00:25:03.260 "raid_level": "raid5f", 00:25:03.260 "superblock": true, 00:25:03.260 "num_base_bdevs": 3, 00:25:03.260 "num_base_bdevs_discovered": 3, 00:25:03.260 "num_base_bdevs_operational": 3, 00:25:03.260 "base_bdevs_list": [ 00:25:03.260 { 00:25:03.260 "name": "spare", 00:25:03.260 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:03.260 "is_configured": true, 00:25:03.260 "data_offset": 2048, 00:25:03.260 "data_size": 63488 00:25:03.260 }, 00:25:03.260 { 00:25:03.260 "name": "BaseBdev2", 00:25:03.260 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:03.260 "is_configured": true, 00:25:03.260 "data_offset": 2048, 00:25:03.260 "data_size": 63488 00:25:03.260 }, 00:25:03.260 { 00:25:03.260 "name": "BaseBdev3", 00:25:03.260 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:03.260 "is_configured": true, 00:25:03.260 "data_offset": 2048, 00:25:03.260 "data_size": 63488 00:25:03.260 } 
00:25:03.260 ] 00:25:03.260 }' 00:25:03.260 22:33:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.260 22:33:02 -- common/autotest_common.sh@10 -- # set +x 00:25:03.827 22:33:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:04.086 [2024-07-14 22:33:03.025605] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:04.086 [2024-07-14 22:33:03.025631] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.086 [2024-07-14 22:33:03.025705] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.086 [2024-07-14 22:33:03.025780] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.086 [2024-07-14 22:33:03.025795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:25:04.086 22:33:03 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.086 22:33:03 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:04.343 22:33:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:04.343 22:33:03 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:04.343 22:33:03 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@12 -- # local i 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:04.343 22:33:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:04.600 /dev/nbd0 00:25:04.600 22:33:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:04.600 22:33:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:04.600 22:33:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:04.600 22:33:03 -- common/autotest_common.sh@857 -- # local i 00:25:04.600 22:33:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:04.600 22:33:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:04.600 22:33:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:04.600 22:33:03 -- common/autotest_common.sh@861 -- # break 00:25:04.600 22:33:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:04.600 22:33:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:04.600 22:33:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:04.600 1+0 records in 00:25:04.600 1+0 records out 00:25:04.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527414 s, 7.8 MB/s 00:25:04.600 22:33:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.600 22:33:03 -- common/autotest_common.sh@874 -- # size=4096 00:25:04.600 22:33:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.600 22:33:03 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:04.600 22:33:03 -- common/autotest_common.sh@877 -- # return 0 00:25:04.600 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:04.600 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:04.600 22:33:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:04.858 /dev/nbd1 00:25:04.858 22:33:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:04.858 22:33:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:04.858 22:33:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:04.858 22:33:03 -- common/autotest_common.sh@857 -- # local i 00:25:04.858 22:33:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:04.858 22:33:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:04.858 22:33:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:04.858 22:33:03 -- common/autotest_common.sh@861 -- # break 00:25:04.858 22:33:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:04.858 22:33:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:04.858 22:33:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:04.858 1+0 records in 00:25:04.858 1+0 records out 00:25:04.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047038 s, 8.7 MB/s 00:25:04.858 22:33:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.858 22:33:03 -- common/autotest_common.sh@874 -- # size=4096 00:25:04.858 22:33:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.858 22:33:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:04.858 22:33:03 -- common/autotest_common.sh@877 -- # return 0 00:25:04.858 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:04.858 22:33:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:04.858 22:33:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:05.115 22:33:04 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:05.115 22:33:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:05.116 22:33:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:05.116 22:33:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:05.116 22:33:04 -- bdev/nbd_common.sh@51 -- # local i 00:25:05.116 22:33:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:05.116 22:33:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@41 -- # break 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@45 -- # return 0 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:05.374 22:33:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:05.633 22:33:04 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@41 -- # break 00:25:05.633 22:33:04 -- bdev/nbd_common.sh@45 -- # return 0 00:25:05.633 22:33:04 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:05.633 22:33:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:05.633 22:33:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:05.633 22:33:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:06.201 22:33:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:06.201 [2024-07-14 22:33:05.210196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:06.201 [2024-07-14 22:33:05.210276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.201 [2024-07-14 22:33:05.210308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:06.201 [2024-07-14 22:33:05.210333] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.201 [2024-07-14 22:33:05.212458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.201 [2024-07-14 22:33:05.212520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:06.201 [2024-07-14 22:33:05.212610] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:06.201 [2024-07-14 22:33:05.212669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.201 BaseBdev1 00:25:06.201 22:33:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:06.201 22:33:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:06.201 22:33:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:06.460 22:33:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:06.719 [2024-07-14 22:33:05.654258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:06.719 [2024-07-14 22:33:05.654310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.719 [2024-07-14 22:33:05.654349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:06.719 [2024-07-14 22:33:05.654367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.719 [2024-07-14 22:33:05.654733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.719 [2024-07-14 22:33:05.654783] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:06.719 [2024-07-14 22:33:05.654863] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:06.719 [2024-07-14 22:33:05.654876] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:06.719 [2024-07-14 22:33:05.654882] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:06.719 [2024-07-14 22:33:05.654903] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:25:06.719 [2024-07-14 22:33:05.654954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:06.719 BaseBdev2 00:25:06.719 22:33:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:06.719 22:33:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:06.719 22:33:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:06.977 22:33:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:07.236 [2024-07-14 22:33:06.126352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:07.236 [2024-07-14 22:33:06.126411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.236 [2024-07-14 22:33:06.126445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:07.236 [2024-07-14 22:33:06.126466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.236 [2024-07-14 22:33:06.126836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.236 [2024-07-14 22:33:06.126889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:07.236 [2024-07-14 22:33:06.126967] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:07.236 [2024-07-14 22:33:06.126989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.236 BaseBdev3 00:25:07.236 22:33:06 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:07.494 [2024-07-14 22:33:06.514436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:07.494 [2024-07-14 22:33:06.514506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.494 [2024-07-14 22:33:06.514541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:07.494 [2024-07-14 22:33:06.514567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.494 [2024-07-14 22:33:06.515041] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.494 [2024-07-14 22:33:06.515101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:07.494 [2024-07-14 22:33:06.515184] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:07.494 [2024-07-14 22:33:06.515216] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:07.494 spare 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.494 22:33:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.753 [2024-07-14 22:33:06.615310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:25:07.753 [2024-07-14 22:33:06.615331] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:07.753 [2024-07-14 22:33:06.615443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:25:07.753 [2024-07-14 22:33:06.619373] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:25:07.753 [2024-07-14 22:33:06.619397] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:25:07.753 [2024-07-14 22:33:06.619534] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.753 22:33:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.753 "name": "raid_bdev1", 00:25:07.753 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:25:07.753 "strip_size_kb": 64, 00:25:07.753 "state": "online", 00:25:07.753 "raid_level": "raid5f", 00:25:07.753 "superblock": true, 00:25:07.753 "num_base_bdevs": 3, 00:25:07.753 "num_base_bdevs_discovered": 3, 00:25:07.753 "num_base_bdevs_operational": 3, 00:25:07.753 "base_bdevs_list": [ 00:25:07.753 { 00:25:07.753 "name": "spare", 00:25:07.753 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:07.753 "is_configured": true, 00:25:07.753 "data_offset": 2048, 00:25:07.753 "data_size": 63488 00:25:07.753 }, 00:25:07.753 { 00:25:07.753 "name": "BaseBdev2", 00:25:07.753 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:07.753 "is_configured": true, 00:25:07.753 "data_offset": 2048, 00:25:07.753 "data_size": 63488 00:25:07.753 }, 00:25:07.753 { 00:25:07.753 "name": "BaseBdev3", 00:25:07.753 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:07.753 "is_configured": true, 00:25:07.753 "data_offset": 2048, 00:25:07.753 "data_size": 63488 00:25:07.753 } 00:25:07.753 ] 00:25:07.753 }' 00:25:07.753 22:33:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.753 22:33:06 -- common/autotest_common.sh@10 -- # set +x 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@185 -- # local target=none 
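For reference, the state check driven above reduces to a few lines of standalone shell. This sketch reuses the rpc.py path and RPC socket that appear in the trace, and asserts the same fields the test compares (state, level, strip size, operational base bdev count); it is a minimal illustration, not the verbatim verify_raid_bdev_state body:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Dump every raid bdev the app knows about and keep only raid_bdev1.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')

  # Assert the fields the test cares about.
  [[ $(jq -r '.state' <<<"$info") == online ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid5f ]]
  [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ]]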
00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.689 "name": "raid_bdev1", 00:25:08.689 "uuid": "29579202-a30d-4217-9e1c-3383ce94665c", 00:25:08.689 "strip_size_kb": 64, 00:25:08.689 "state": "online", 00:25:08.689 "raid_level": "raid5f", 00:25:08.689 "superblock": true, 00:25:08.689 "num_base_bdevs": 3, 00:25:08.689 "num_base_bdevs_discovered": 3, 00:25:08.689 "num_base_bdevs_operational": 3, 00:25:08.689 "base_bdevs_list": [ 00:25:08.689 { 00:25:08.689 "name": "spare", 00:25:08.689 "uuid": "4a534cef-5728-568e-9d54-18a8e486f07a", 00:25:08.689 "is_configured": true, 00:25:08.689 "data_offset": 2048, 00:25:08.689 "data_size": 63488 00:25:08.689 }, 00:25:08.689 { 00:25:08.689 "name": "BaseBdev2", 00:25:08.689 "uuid": "0e1d8fcd-cf70-507a-883a-b63975efb956", 00:25:08.689 "is_configured": true, 00:25:08.689 "data_offset": 2048, 00:25:08.689 "data_size": 63488 00:25:08.689 }, 00:25:08.689 { 00:25:08.689 "name": "BaseBdev3", 00:25:08.689 "uuid": "86f2482d-032b-5774-8cc2-38436bd1f685", 00:25:08.689 "is_configured": true, 00:25:08.689 "data_offset": 2048, 00:25:08.689 "data_size": 63488 00:25:08.689 } 00:25:08.689 ] 00:25:08.689 }' 00:25:08.689 22:33:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.948 22:33:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:08.948 22:33:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.948 22:33:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:08.948 22:33:07 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:08.948 22:33:07 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.206 22:33:08 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:09.206 22:33:08 -- bdev/bdev_raid.sh@709 -- # killprocess 132456 00:25:09.206 22:33:08 -- common/autotest_common.sh@926 -- # '[' -z 132456 ']' 00:25:09.206 22:33:08 -- common/autotest_common.sh@930 -- # kill -0 132456 00:25:09.206 22:33:08 -- common/autotest_common.sh@931 -- # uname 00:25:09.206 22:33:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:09.206 22:33:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132456 00:25:09.206 killing process with pid 132456 00:25:09.206 Received shutdown signal, test time was about 60.000000 seconds 00:25:09.206 00:25:09.206 Latency(us) 00:25:09.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.206 =================================================================================================================== 00:25:09.206 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:09.206 22:33:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:09.206 22:33:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:09.206 22:33:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132456' 00:25:09.206 22:33:08 -- common/autotest_common.sh@945 -- # kill 132456 00:25:09.206 22:33:08 -- common/autotest_common.sh@950 -- # wait 132456 00:25:09.206 [2024-07-14 22:33:08.060484] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
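The killprocess helper exercised at the end of this test follows a simple kill-and-wait pattern. The following is a hedged reconstruction from the calls visible in the trace (kill -0, uname, ps --no-headers -o comm=, kill, wait); the exact function body in autotest_common.sh may differ:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                    # still running?
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")   # resolve process name
          [[ $name != sudo ]] || return 1           # refuse to signal sudo
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap and propagate exit status
  }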
00:25:09.206 [2024-07-14 22:33:08.060536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.206 [2024-07-14 22:33:08.060604] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.206 [2024-07-14 22:33:08.060614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:25:09.464 [2024-07-14 22:33:08.309470] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.400 ************************************ 00:25:10.400 END TEST raid5f_rebuild_test_sb 00:25:10.400 ************************************ 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:10.400 00:25:10.400 real 0m24.740s 00:25:10.400 user 0m39.229s 00:25:10.400 sys 0m2.690s 00:25:10.400 22:33:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.400 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:25:10.400 22:33:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:10.400 22:33:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:10.400 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.400 ************************************ 00:25:10.400 START TEST raid5f_state_function_test 00:25:10.400 ************************************ 00:25:10.400 22:33:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 
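The base_bdevs array expansion traced just above is easier to read in isolation; this is the same construct, pulled out of the xtrace noise:

  num_base_bdevs=4
  # Generate BaseBdev1..BaseBdev4 and capture them as a bash array.
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
  echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4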
00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=133161 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133161' 00:25:10.400 Process raid pid: 133161 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133161 /var/tmp/spdk-raid.sock 00:25:10.400 22:33:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:10.400 22:33:09 -- common/autotest_common.sh@819 -- # '[' -z 133161 ']' 00:25:10.400 22:33:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:10.400 22:33:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:10.400 22:33:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:10.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:10.400 22:33:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:10.400 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.400 [2024-07-14 22:33:09.354936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:10.400 [2024-07-14 22:33:09.355121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.658 [2024-07-14 22:33:09.527345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.658 [2024-07-14 22:33:09.681810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.917 [2024-07-14 22:33:09.846962] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.484 22:33:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:11.484 22:33:10 -- common/autotest_common.sh@852 -- # return 0 00:25:11.484 22:33:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:11.484 [2024-07-14 22:33:10.548917] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:11.484 [2024-07-14 22:33:10.548987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:11.484 [2024-07-14 22:33:10.548999] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.484 [2024-07-14 22:33:10.549019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.484 [2024-07-14 22:33:10.549025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:11.484 [2024-07-14 22:33:10.549059] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:11.484 [2024-07-14 22:33:10.549067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:11.484 [2024-07-14 22:33:10.549086] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.743 22:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.002 22:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.002 "name": "Existed_Raid", 00:25:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.002 "strip_size_kb": 64, 00:25:12.002 "state": "configuring", 00:25:12.002 "raid_level": "raid5f", 00:25:12.002 "superblock": false, 00:25:12.002 "num_base_bdevs": 4, 00:25:12.002 "num_base_bdevs_discovered": 0, 00:25:12.002 "num_base_bdevs_operational": 4, 00:25:12.002 "base_bdevs_list": [ 00:25:12.002 { 00:25:12.002 "name": "BaseBdev1", 00:25:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.002 "is_configured": false, 00:25:12.002 "data_offset": 0, 00:25:12.002 "data_size": 0 00:25:12.002 }, 00:25:12.002 { 00:25:12.002 "name": "BaseBdev2", 00:25:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.002 "is_configured": false, 00:25:12.002 "data_offset": 0, 00:25:12.002 "data_size": 0 00:25:12.002 }, 00:25:12.002 { 00:25:12.002 "name": "BaseBdev3", 00:25:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.002 "is_configured": false, 00:25:12.002 "data_offset": 0, 00:25:12.002 "data_size": 0 00:25:12.002 }, 00:25:12.002 { 00:25:12.002 "name": "BaseBdev4", 00:25:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.002 "is_configured": false, 00:25:12.002 "data_offset": 0, 00:25:12.002 "data_size": 0 00:25:12.002 } 00:25:12.002 ] 00:25:12.002 }' 00:25:12.002 22:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.002 22:33:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.567 22:33:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:12.825 [2024-07-14 22:33:11.745007] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:12.825 [2024-07-14 22:33:11.745040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:12.825 22:33:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:13.083 [2024-07-14 22:33:12.001077] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:13.083 [2024-07-14 22:33:12.001122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:13.083 [2024-07-14 22:33:12.001131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:13.083 [2024-07-14 
22:33:12.001158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:13.083 [2024-07-14 22:33:12.001165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:13.083 [2024-07-14 22:33:12.001195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:13.083 [2024-07-14 22:33:12.001202] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:13.083 [2024-07-14 22:33:12.001221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:13.083 22:33:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:13.341 [2024-07-14 22:33:12.286362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.341 BaseBdev1 00:25:13.341 22:33:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:13.341 22:33:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:13.341 22:33:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:13.341 22:33:12 -- common/autotest_common.sh@889 -- # local i 00:25:13.341 22:33:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:13.341 22:33:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:13.341 22:33:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:13.599 22:33:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:13.599 [ 00:25:13.599 { 00:25:13.600 "name": "BaseBdev1", 00:25:13.600 "aliases": [ 00:25:13.600 "b10d6df6-d600-44e1-84c6-e198dab49a30" 00:25:13.600 ], 00:25:13.600 "product_name": "Malloc disk", 00:25:13.600 "block_size": 512, 00:25:13.600 "num_blocks": 65536, 00:25:13.600 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:13.600 "assigned_rate_limits": { 00:25:13.600 "rw_ios_per_sec": 0, 00:25:13.600 "rw_mbytes_per_sec": 0, 00:25:13.600 "r_mbytes_per_sec": 0, 00:25:13.600 "w_mbytes_per_sec": 0 00:25:13.600 }, 00:25:13.600 "claimed": true, 00:25:13.600 "claim_type": "exclusive_write", 00:25:13.600 "zoned": false, 00:25:13.600 "supported_io_types": { 00:25:13.600 "read": true, 00:25:13.600 "write": true, 00:25:13.600 "unmap": true, 00:25:13.600 "write_zeroes": true, 00:25:13.600 "flush": true, 00:25:13.600 "reset": true, 00:25:13.600 "compare": false, 00:25:13.600 "compare_and_write": false, 00:25:13.600 "abort": true, 00:25:13.600 "nvme_admin": false, 00:25:13.600 "nvme_io": false 00:25:13.600 }, 00:25:13.600 "memory_domains": [ 00:25:13.600 { 00:25:13.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.600 "dma_device_type": 2 00:25:13.600 } 00:25:13.600 ], 00:25:13.600 "driver_specific": {} 00:25:13.600 } 00:25:13.600 ] 00:25:13.858 22:33:12 -- common/autotest_common.sh@895 -- # return 0 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.858 "name": "Existed_Raid", 00:25:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.858 "strip_size_kb": 64, 00:25:13.858 "state": "configuring", 00:25:13.858 "raid_level": "raid5f", 00:25:13.858 "superblock": false, 00:25:13.858 "num_base_bdevs": 4, 00:25:13.858 "num_base_bdevs_discovered": 1, 00:25:13.858 "num_base_bdevs_operational": 4, 00:25:13.858 "base_bdevs_list": [ 00:25:13.858 { 00:25:13.858 "name": "BaseBdev1", 00:25:13.858 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:13.858 "is_configured": true, 00:25:13.858 "data_offset": 0, 00:25:13.858 "data_size": 65536 00:25:13.858 }, 00:25:13.858 { 00:25:13.858 "name": "BaseBdev2", 00:25:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.858 "is_configured": false, 00:25:13.858 "data_offset": 0, 00:25:13.858 "data_size": 0 00:25:13.858 }, 00:25:13.858 { 00:25:13.858 "name": "BaseBdev3", 00:25:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.858 "is_configured": false, 00:25:13.858 "data_offset": 0, 00:25:13.858 "data_size": 0 00:25:13.858 }, 00:25:13.858 { 00:25:13.858 "name": "BaseBdev4", 00:25:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.858 "is_configured": false, 00:25:13.858 "data_offset": 0, 00:25:13.858 "data_size": 0 00:25:13.858 } 00:25:13.858 ] 00:25:13.858 }' 00:25:13.858 22:33:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.858 22:33:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.792 22:33:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:14.792 [2024-07-14 22:33:13.762696] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:14.792 [2024-07-14 22:33:13.762853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:14.792 22:33:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:14.792 22:33:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:15.051 [2024-07-14 22:33:13.962778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.051 [2024-07-14 22:33:13.964675] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.051 [2024-07-14 22:33:13.964877] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:15.051 [2024-07-14 22:33:13.964994] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:15.051 [2024-07-14 22:33:13.965050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:15.051 [2024-07-14 22:33:13.965144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev4 00:25:15.051 [2024-07-14 22:33:13.965194] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.051 22:33:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.310 22:33:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:15.310 "name": "Existed_Raid", 00:25:15.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.310 "strip_size_kb": 64, 00:25:15.310 "state": "configuring", 00:25:15.310 "raid_level": "raid5f", 00:25:15.310 "superblock": false, 00:25:15.310 "num_base_bdevs": 4, 00:25:15.310 "num_base_bdevs_discovered": 1, 00:25:15.310 "num_base_bdevs_operational": 4, 00:25:15.310 "base_bdevs_list": [ 00:25:15.310 { 00:25:15.310 "name": "BaseBdev1", 00:25:15.310 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:15.310 "is_configured": true, 00:25:15.310 "data_offset": 0, 00:25:15.310 "data_size": 65536 00:25:15.310 }, 00:25:15.310 { 00:25:15.310 "name": "BaseBdev2", 00:25:15.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.310 "is_configured": false, 00:25:15.310 "data_offset": 0, 00:25:15.310 "data_size": 0 00:25:15.310 }, 00:25:15.310 { 00:25:15.310 "name": "BaseBdev3", 00:25:15.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.310 "is_configured": false, 00:25:15.310 "data_offset": 0, 00:25:15.310 "data_size": 0 00:25:15.310 }, 00:25:15.310 { 00:25:15.310 "name": "BaseBdev4", 00:25:15.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.310 "is_configured": false, 00:25:15.310 "data_offset": 0, 00:25:15.310 "data_size": 0 00:25:15.310 } 00:25:15.310 ] 00:25:15.310 }' 00:25:15.310 22:33:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:15.310 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:25:15.899 22:33:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:16.165 [2024-07-14 22:33:15.012478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:16.165 BaseBdev2 00:25:16.165 22:33:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:16.165 22:33:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:16.165 22:33:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:16.165 22:33:15 -- common/autotest_common.sh@889 -- # local i 00:25:16.165 22:33:15 -- common/autotest_common.sh@890 
-- # [[ -z '' ]] 00:25:16.165 22:33:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:16.165 22:33:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.423 22:33:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:16.423 [ 00:25:16.423 { 00:25:16.423 "name": "BaseBdev2", 00:25:16.423 "aliases": [ 00:25:16.423 "083b6164-1b99-4945-9c6e-dca7f26242ab" 00:25:16.423 ], 00:25:16.423 "product_name": "Malloc disk", 00:25:16.423 "block_size": 512, 00:25:16.423 "num_blocks": 65536, 00:25:16.423 "uuid": "083b6164-1b99-4945-9c6e-dca7f26242ab", 00:25:16.423 "assigned_rate_limits": { 00:25:16.423 "rw_ios_per_sec": 0, 00:25:16.423 "rw_mbytes_per_sec": 0, 00:25:16.423 "r_mbytes_per_sec": 0, 00:25:16.423 "w_mbytes_per_sec": 0 00:25:16.423 }, 00:25:16.423 "claimed": true, 00:25:16.423 "claim_type": "exclusive_write", 00:25:16.423 "zoned": false, 00:25:16.423 "supported_io_types": { 00:25:16.423 "read": true, 00:25:16.423 "write": true, 00:25:16.423 "unmap": true, 00:25:16.423 "write_zeroes": true, 00:25:16.423 "flush": true, 00:25:16.423 "reset": true, 00:25:16.423 "compare": false, 00:25:16.423 "compare_and_write": false, 00:25:16.423 "abort": true, 00:25:16.423 "nvme_admin": false, 00:25:16.423 "nvme_io": false 00:25:16.423 }, 00:25:16.423 "memory_domains": [ 00:25:16.423 { 00:25:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.423 "dma_device_type": 2 00:25:16.423 } 00:25:16.423 ], 00:25:16.423 "driver_specific": {} 00:25:16.423 } 00:25:16.423 ] 00:25:16.423 22:33:15 -- common/autotest_common.sh@895 -- # return 0 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.423 22:33:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.682 22:33:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.682 "name": "Existed_Raid", 00:25:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.682 "strip_size_kb": 64, 00:25:16.682 "state": "configuring", 00:25:16.682 "raid_level": "raid5f", 00:25:16.682 "superblock": false, 00:25:16.682 "num_base_bdevs": 4, 00:25:16.682 "num_base_bdevs_discovered": 2, 00:25:16.682 "num_base_bdevs_operational": 4, 00:25:16.682 "base_bdevs_list": [ 00:25:16.682 { 00:25:16.682 "name": "BaseBdev1", 00:25:16.682 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:16.682 "is_configured": true, 
00:25:16.682 "data_offset": 0, 00:25:16.682 "data_size": 65536 00:25:16.682 }, 00:25:16.682 { 00:25:16.682 "name": "BaseBdev2", 00:25:16.682 "uuid": "083b6164-1b99-4945-9c6e-dca7f26242ab", 00:25:16.682 "is_configured": true, 00:25:16.682 "data_offset": 0, 00:25:16.682 "data_size": 65536 00:25:16.682 }, 00:25:16.682 { 00:25:16.682 "name": "BaseBdev3", 00:25:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.682 "is_configured": false, 00:25:16.682 "data_offset": 0, 00:25:16.682 "data_size": 0 00:25:16.682 }, 00:25:16.682 { 00:25:16.682 "name": "BaseBdev4", 00:25:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.682 "is_configured": false, 00:25:16.682 "data_offset": 0, 00:25:16.682 "data_size": 0 00:25:16.682 } 00:25:16.682 ] 00:25:16.682 }' 00:25:16.682 22:33:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.682 22:33:15 -- common/autotest_common.sh@10 -- # set +x 00:25:17.617 22:33:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:17.618 [2024-07-14 22:33:16.612348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:17.618 BaseBdev3 00:25:17.618 22:33:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:17.618 22:33:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:17.618 22:33:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:17.618 22:33:16 -- common/autotest_common.sh@889 -- # local i 00:25:17.618 22:33:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:17.618 22:33:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:17.618 22:33:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.876 22:33:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:18.134 [ 00:25:18.134 { 00:25:18.134 "name": "BaseBdev3", 00:25:18.134 "aliases": [ 00:25:18.134 "f9536744-24aa-46cf-92ae-9e6dd68dc8c8" 00:25:18.134 ], 00:25:18.134 "product_name": "Malloc disk", 00:25:18.134 "block_size": 512, 00:25:18.134 "num_blocks": 65536, 00:25:18.134 "uuid": "f9536744-24aa-46cf-92ae-9e6dd68dc8c8", 00:25:18.134 "assigned_rate_limits": { 00:25:18.134 "rw_ios_per_sec": 0, 00:25:18.134 "rw_mbytes_per_sec": 0, 00:25:18.134 "r_mbytes_per_sec": 0, 00:25:18.134 "w_mbytes_per_sec": 0 00:25:18.134 }, 00:25:18.134 "claimed": true, 00:25:18.134 "claim_type": "exclusive_write", 00:25:18.134 "zoned": false, 00:25:18.134 "supported_io_types": { 00:25:18.134 "read": true, 00:25:18.134 "write": true, 00:25:18.134 "unmap": true, 00:25:18.134 "write_zeroes": true, 00:25:18.134 "flush": true, 00:25:18.134 "reset": true, 00:25:18.134 "compare": false, 00:25:18.134 "compare_and_write": false, 00:25:18.134 "abort": true, 00:25:18.134 "nvme_admin": false, 00:25:18.134 "nvme_io": false 00:25:18.134 }, 00:25:18.134 "memory_domains": [ 00:25:18.134 { 00:25:18.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.134 "dma_device_type": 2 00:25:18.134 } 00:25:18.134 ], 00:25:18.134 "driver_specific": {} 00:25:18.134 } 00:25:18.134 ] 00:25:18.134 22:33:17 -- common/autotest_common.sh@895 -- # return 0 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.134 22:33:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.392 22:33:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.392 "name": "Existed_Raid", 00:25:18.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.392 "strip_size_kb": 64, 00:25:18.392 "state": "configuring", 00:25:18.392 "raid_level": "raid5f", 00:25:18.392 "superblock": false, 00:25:18.392 "num_base_bdevs": 4, 00:25:18.392 "num_base_bdevs_discovered": 3, 00:25:18.392 "num_base_bdevs_operational": 4, 00:25:18.392 "base_bdevs_list": [ 00:25:18.392 { 00:25:18.392 "name": "BaseBdev1", 00:25:18.392 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:18.392 "is_configured": true, 00:25:18.392 "data_offset": 0, 00:25:18.392 "data_size": 65536 00:25:18.392 }, 00:25:18.392 { 00:25:18.392 "name": "BaseBdev2", 00:25:18.392 "uuid": "083b6164-1b99-4945-9c6e-dca7f26242ab", 00:25:18.392 "is_configured": true, 00:25:18.392 "data_offset": 0, 00:25:18.392 "data_size": 65536 00:25:18.392 }, 00:25:18.392 { 00:25:18.392 "name": "BaseBdev3", 00:25:18.392 "uuid": "f9536744-24aa-46cf-92ae-9e6dd68dc8c8", 00:25:18.392 "is_configured": true, 00:25:18.392 "data_offset": 0, 00:25:18.392 "data_size": 65536 00:25:18.392 }, 00:25:18.392 { 00:25:18.392 "name": "BaseBdev4", 00:25:18.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.392 "is_configured": false, 00:25:18.392 "data_offset": 0, 00:25:18.392 "data_size": 0 00:25:18.392 } 00:25:18.392 ] 00:25:18.392 }' 00:25:18.392 22:33:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.392 22:33:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.325 22:33:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:19.325 [2024-07-14 22:33:18.360043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:19.325 [2024-07-14 22:33:18.360272] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:19.325 [2024-07-14 22:33:18.360312] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:19.325 [2024-07-14 22:33:18.360519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:19.325 [2024-07-14 22:33:18.366062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:19.325 [2024-07-14 22:33:18.366206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:19.325 [2024-07-14 22:33:18.366631] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:19.325 BaseBdev4 00:25:19.325 22:33:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:19.325 22:33:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:19.325 22:33:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:19.325 22:33:18 -- common/autotest_common.sh@889 -- # local i 00:25:19.325 22:33:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:19.325 22:33:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:19.325 22:33:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.582 22:33:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:19.840 [ 00:25:19.840 { 00:25:19.840 "name": "BaseBdev4", 00:25:19.840 "aliases": [ 00:25:19.840 "a0b0829d-bfae-4dc6-aa04-41b79f968c1b" 00:25:19.840 ], 00:25:19.840 "product_name": "Malloc disk", 00:25:19.840 "block_size": 512, 00:25:19.840 "num_blocks": 65536, 00:25:19.840 "uuid": "a0b0829d-bfae-4dc6-aa04-41b79f968c1b", 00:25:19.840 "assigned_rate_limits": { 00:25:19.840 "rw_ios_per_sec": 0, 00:25:19.840 "rw_mbytes_per_sec": 0, 00:25:19.840 "r_mbytes_per_sec": 0, 00:25:19.840 "w_mbytes_per_sec": 0 00:25:19.840 }, 00:25:19.840 "claimed": true, 00:25:19.840 "claim_type": "exclusive_write", 00:25:19.840 "zoned": false, 00:25:19.840 "supported_io_types": { 00:25:19.840 "read": true, 00:25:19.840 "write": true, 00:25:19.840 "unmap": true, 00:25:19.840 "write_zeroes": true, 00:25:19.840 "flush": true, 00:25:19.840 "reset": true, 00:25:19.840 "compare": false, 00:25:19.840 "compare_and_write": false, 00:25:19.840 "abort": true, 00:25:19.840 "nvme_admin": false, 00:25:19.840 "nvme_io": false 00:25:19.840 }, 00:25:19.840 "memory_domains": [ 00:25:19.840 { 00:25:19.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.840 "dma_device_type": 2 00:25:19.840 } 00:25:19.840 ], 00:25:19.840 "driver_specific": {} 00:25:19.840 } 00:25:19.840 ] 00:25:19.840 22:33:18 -- common/autotest_common.sh@895 -- # return 0 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:19.840 22:33:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.841 22:33:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.099 22:33:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:20.099 "name": "Existed_Raid", 00:25:20.099 "uuid": "de3ea28c-7b5f-4204-83ab-9ddcb563ceba", 00:25:20.099 "strip_size_kb": 64, 00:25:20.099 "state": 
"online", 00:25:20.099 "raid_level": "raid5f", 00:25:20.099 "superblock": false, 00:25:20.099 "num_base_bdevs": 4, 00:25:20.099 "num_base_bdevs_discovered": 4, 00:25:20.099 "num_base_bdevs_operational": 4, 00:25:20.099 "base_bdevs_list": [ 00:25:20.099 { 00:25:20.099 "name": "BaseBdev1", 00:25:20.099 "uuid": "b10d6df6-d600-44e1-84c6-e198dab49a30", 00:25:20.099 "is_configured": true, 00:25:20.099 "data_offset": 0, 00:25:20.099 "data_size": 65536 00:25:20.099 }, 00:25:20.099 { 00:25:20.099 "name": "BaseBdev2", 00:25:20.099 "uuid": "083b6164-1b99-4945-9c6e-dca7f26242ab", 00:25:20.099 "is_configured": true, 00:25:20.099 "data_offset": 0, 00:25:20.099 "data_size": 65536 00:25:20.099 }, 00:25:20.099 { 00:25:20.099 "name": "BaseBdev3", 00:25:20.099 "uuid": "f9536744-24aa-46cf-92ae-9e6dd68dc8c8", 00:25:20.099 "is_configured": true, 00:25:20.099 "data_offset": 0, 00:25:20.099 "data_size": 65536 00:25:20.099 }, 00:25:20.099 { 00:25:20.099 "name": "BaseBdev4", 00:25:20.099 "uuid": "a0b0829d-bfae-4dc6-aa04-41b79f968c1b", 00:25:20.099 "is_configured": true, 00:25:20.099 "data_offset": 0, 00:25:20.099 "data_size": 65536 00:25:20.099 } 00:25:20.099 ] 00:25:20.099 }' 00:25:20.099 22:33:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.099 22:33:18 -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 22:33:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:20.925 [2024-07-14 22:33:19.824892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.925 22:33:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.184 22:33:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.184 "name": "Existed_Raid", 00:25:21.184 "uuid": "de3ea28c-7b5f-4204-83ab-9ddcb563ceba", 00:25:21.184 "strip_size_kb": 64, 00:25:21.184 "state": "online", 00:25:21.184 "raid_level": "raid5f", 00:25:21.184 "superblock": false, 00:25:21.184 "num_base_bdevs": 4, 00:25:21.184 "num_base_bdevs_discovered": 3, 00:25:21.184 "num_base_bdevs_operational": 3, 00:25:21.184 "base_bdevs_list": [ 00:25:21.184 { 00:25:21.184 "name": null, 00:25:21.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.184 "is_configured": 
false, 00:25:21.184 "data_offset": 0, 00:25:21.184 "data_size": 65536 00:25:21.184 }, 00:25:21.184 { 00:25:21.184 "name": "BaseBdev2", 00:25:21.184 "uuid": "083b6164-1b99-4945-9c6e-dca7f26242ab", 00:25:21.184 "is_configured": true, 00:25:21.184 "data_offset": 0, 00:25:21.184 "data_size": 65536 00:25:21.184 }, 00:25:21.184 { 00:25:21.184 "name": "BaseBdev3", 00:25:21.184 "uuid": "f9536744-24aa-46cf-92ae-9e6dd68dc8c8", 00:25:21.184 "is_configured": true, 00:25:21.184 "data_offset": 0, 00:25:21.184 "data_size": 65536 00:25:21.184 }, 00:25:21.184 { 00:25:21.184 "name": "BaseBdev4", 00:25:21.184 "uuid": "a0b0829d-bfae-4dc6-aa04-41b79f968c1b", 00:25:21.184 "is_configured": true, 00:25:21.184 "data_offset": 0, 00:25:21.184 "data_size": 65536 00:25:21.184 } 00:25:21.184 ] 00:25:21.184 }' 00:25:21.184 22:33:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.184 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:25:22.119 22:33:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:22.119 22:33:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:22.119 22:33:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.119 22:33:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:22.119 22:33:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:22.119 22:33:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:22.119 22:33:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:22.377 [2024-07-14 22:33:21.336517] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:22.377 [2024-07-14 22:33:21.336723] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.377 [2024-07-14 22:33:21.336874] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.377 22:33:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:22.377 22:33:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:22.377 22:33:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.377 22:33:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:22.635 22:33:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:22.635 22:33:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:22.635 22:33:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:22.894 [2024-07-14 22:33:21.853160] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:22.894 22:33:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:22.894 22:33:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:22.894 22:33:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.894 22:33:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:23.153 22:33:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:23.153 22:33:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:23.153 22:33:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:23.411 [2024-07-14 22:33:22.367873] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:23.411 [2024-07-14 
22:33:22.368090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:23.411 22:33:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:23.411 22:33:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:23.411 22:33:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.411 22:33:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:23.670 22:33:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:23.670 22:33:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:23.670 22:33:22 -- bdev/bdev_raid.sh@287 -- # killprocess 133161 00:25:23.670 22:33:22 -- common/autotest_common.sh@926 -- # '[' -z 133161 ']' 00:25:23.670 22:33:22 -- common/autotest_common.sh@930 -- # kill -0 133161 00:25:23.670 22:33:22 -- common/autotest_common.sh@931 -- # uname 00:25:23.670 22:33:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.670 22:33:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133161 00:25:23.928 killing process with pid 133161 00:25:23.928 22:33:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:23.928 22:33:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:23.928 22:33:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133161' 00:25:23.928 22:33:22 -- common/autotest_common.sh@945 -- # kill 133161 00:25:23.928 22:33:22 -- common/autotest_common.sh@950 -- # wait 133161 00:25:23.928 [2024-07-14 22:33:22.741208] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.928 [2024-07-14 22:33:22.741301] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.861 ************************************ 00:25:24.861 END TEST raid5f_state_function_test 00:25:24.861 ************************************ 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:24.861 00:25:24.861 real 0m14.386s 00:25:24.861 user 0m25.896s 00:25:24.861 sys 0m1.777s 00:25:24.861 22:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.861 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:25:24.861 22:33:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:24.861 22:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.861 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 ************************************ 00:25:24.861 START TEST raid5f_state_function_test_sb 00:25:24.861 ************************************ 00:25:24.861 22:33:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 
-- # (( i <= num_base_bdevs )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=133610 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133610' 00:25:24.861 Process raid pid: 133610 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:24.861 22:33:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133610 /var/tmp/spdk-raid.sock 00:25:24.861 22:33:23 -- common/autotest_common.sh@819 -- # '[' -z 133610 ']' 00:25:24.861 22:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:24.861 22:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.861 22:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:24.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:24.861 22:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.861 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 [2024-07-14 22:33:23.807845] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
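The trace above shows the harness launching a dedicated bdev_svc application and pinning every RPC to the private socket /var/tmp/spdk-raid.sock before any state checks run. The same flow can be reproduced by hand; the following is a minimal sketch, assuming an SPDK tree built at /home/vagrant/spdk_repo/spdk as in the paths of this log, and backgrounding bdev_svc by hand where the harness instead uses waitforlisten:

  # start a bare bdev service; all further RPCs target its private socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

  # request a raid5f array with a superblock (-s) and 64 KiB strip size (-z 64);
  # none of the named base bdevs exist yet, so the array stays in "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # dump the array; the test pipes this JSON through jq to assert "state",
  # num_base_bdevs_discovered and num_base_bdevs_operational after each step
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all

Each subsequent bdev_malloc_create 32 512 -b BaseBdevN in the trace flips one more entry of base_bdevs_list from the all-zero UUID to a configured base bdev, and the array leaves "configuring" for "online" only once all four bases are discovered.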
00:25:24.861 [2024-07-14 22:33:23.808256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.118 [2024-07-14 22:33:23.985337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.118 [2024-07-14 22:33:24.141780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.376 [2024-07-14 22:33:24.306966] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.943 22:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:25.943 22:33:24 -- common/autotest_common.sh@852 -- # return 0 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:25.943 [2024-07-14 22:33:24.915425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:25.943 [2024-07-14 22:33:24.915599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:25.943 [2024-07-14 22:33:24.915697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:25.943 [2024-07-14 22:33:24.915751] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:25.943 [2024-07-14 22:33:24.915835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:25.943 [2024-07-14 22:33:24.915962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:25.943 [2024-07-14 22:33:24.915999] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:25.943 [2024-07-14 22:33:24.916039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.943 22:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.202 22:33:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:26.202 "name": "Existed_Raid", 00:25:26.202 "uuid": "bd00e6ae-771f-4890-8f53-90f031d86601", 00:25:26.202 "strip_size_kb": 64, 00:25:26.202 "state": "configuring", 00:25:26.202 "raid_level": "raid5f", 00:25:26.202 "superblock": true, 00:25:26.202 "num_base_bdevs": 4, 00:25:26.202 "num_base_bdevs_discovered": 0, 00:25:26.202 "num_base_bdevs_operational": 4, 00:25:26.202 "base_bdevs_list": [ 00:25:26.202 { 
00:25:26.202 "name": "BaseBdev1", 00:25:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.202 "is_configured": false, 00:25:26.202 "data_offset": 0, 00:25:26.202 "data_size": 0 00:25:26.202 }, 00:25:26.202 { 00:25:26.202 "name": "BaseBdev2", 00:25:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.202 "is_configured": false, 00:25:26.202 "data_offset": 0, 00:25:26.202 "data_size": 0 00:25:26.202 }, 00:25:26.202 { 00:25:26.202 "name": "BaseBdev3", 00:25:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.202 "is_configured": false, 00:25:26.202 "data_offset": 0, 00:25:26.202 "data_size": 0 00:25:26.202 }, 00:25:26.202 { 00:25:26.202 "name": "BaseBdev4", 00:25:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.202 "is_configured": false, 00:25:26.202 "data_offset": 0, 00:25:26.202 "data_size": 0 00:25:26.202 } 00:25:26.202 ] 00:25:26.202 }' 00:25:26.202 22:33:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:26.202 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:27.138 22:33:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:27.138 [2024-07-14 22:33:26.079998] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:27.138 [2024-07-14 22:33:26.080180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:27.138 22:33:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:27.397 [2024-07-14 22:33:26.328234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:27.397 [2024-07-14 22:33:26.328436] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:27.397 [2024-07-14 22:33:26.328560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:27.397 [2024-07-14 22:33:26.328642] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:27.397 [2024-07-14 22:33:26.328858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:27.397 [2024-07-14 22:33:26.328948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:27.397 [2024-07-14 22:33:26.329146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:27.397 [2024-07-14 22:33:26.329217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:27.397 22:33:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:27.655 [2024-07-14 22:33:26.657814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:27.655 BaseBdev1 00:25:27.655 22:33:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:27.655 22:33:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:27.655 22:33:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:27.655 22:33:26 -- common/autotest_common.sh@889 -- # local i 00:25:27.655 22:33:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:27.655 22:33:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:27.655 22:33:26 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:27.914 22:33:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:28.173 [ 00:25:28.173 { 00:25:28.173 "name": "BaseBdev1", 00:25:28.173 "aliases": [ 00:25:28.173 "f75f3e4c-8c30-4f25-987d-a1e0f7f02d69" 00:25:28.173 ], 00:25:28.173 "product_name": "Malloc disk", 00:25:28.173 "block_size": 512, 00:25:28.173 "num_blocks": 65536, 00:25:28.173 "uuid": "f75f3e4c-8c30-4f25-987d-a1e0f7f02d69", 00:25:28.173 "assigned_rate_limits": { 00:25:28.173 "rw_ios_per_sec": 0, 00:25:28.173 "rw_mbytes_per_sec": 0, 00:25:28.173 "r_mbytes_per_sec": 0, 00:25:28.173 "w_mbytes_per_sec": 0 00:25:28.173 }, 00:25:28.173 "claimed": true, 00:25:28.173 "claim_type": "exclusive_write", 00:25:28.173 "zoned": false, 00:25:28.173 "supported_io_types": { 00:25:28.173 "read": true, 00:25:28.173 "write": true, 00:25:28.173 "unmap": true, 00:25:28.173 "write_zeroes": true, 00:25:28.173 "flush": true, 00:25:28.173 "reset": true, 00:25:28.173 "compare": false, 00:25:28.173 "compare_and_write": false, 00:25:28.173 "abort": true, 00:25:28.173 "nvme_admin": false, 00:25:28.173 "nvme_io": false 00:25:28.173 }, 00:25:28.173 "memory_domains": [ 00:25:28.173 { 00:25:28.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.173 "dma_device_type": 2 00:25:28.173 } 00:25:28.173 ], 00:25:28.173 "driver_specific": {} 00:25:28.173 } 00:25:28.173 ] 00:25:28.173 22:33:27 -- common/autotest_common.sh@895 -- # return 0 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.173 22:33:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.431 22:33:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.431 "name": "Existed_Raid", 00:25:28.431 "uuid": "fe72802a-c9f7-4bda-b942-2fb965a7038b", 00:25:28.431 "strip_size_kb": 64, 00:25:28.431 "state": "configuring", 00:25:28.431 "raid_level": "raid5f", 00:25:28.431 "superblock": true, 00:25:28.431 "num_base_bdevs": 4, 00:25:28.431 "num_base_bdevs_discovered": 1, 00:25:28.431 "num_base_bdevs_operational": 4, 00:25:28.431 "base_bdevs_list": [ 00:25:28.431 { 00:25:28.431 "name": "BaseBdev1", 00:25:28.431 "uuid": "f75f3e4c-8c30-4f25-987d-a1e0f7f02d69", 00:25:28.431 "is_configured": true, 00:25:28.431 "data_offset": 2048, 00:25:28.431 "data_size": 63488 00:25:28.431 }, 00:25:28.431 { 00:25:28.431 "name": "BaseBdev2", 00:25:28.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.431 "is_configured": false, 00:25:28.431 "data_offset": 0, 00:25:28.431 "data_size": 0 
00:25:28.431 }, 00:25:28.431 { 00:25:28.431 "name": "BaseBdev3", 00:25:28.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.431 "is_configured": false, 00:25:28.431 "data_offset": 0, 00:25:28.431 "data_size": 0 00:25:28.431 }, 00:25:28.431 { 00:25:28.431 "name": "BaseBdev4", 00:25:28.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.432 "is_configured": false, 00:25:28.432 "data_offset": 0, 00:25:28.432 "data_size": 0 00:25:28.432 } 00:25:28.432 ] 00:25:28.432 }' 00:25:28.432 22:33:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.432 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:25:28.999 22:33:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.258 [2024-07-14 22:33:28.246101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.258 [2024-07-14 22:33:28.246242] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:29.258 22:33:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:29.258 22:33:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:29.516 22:33:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:30.083 BaseBdev1 00:25:30.083 22:33:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:25:30.083 22:33:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:30.083 22:33:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:30.083 22:33:28 -- common/autotest_common.sh@889 -- # local i 00:25:30.083 22:33:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:30.083 22:33:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:30.083 22:33:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:30.083 22:33:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:30.342 [ 00:25:30.342 { 00:25:30.342 "name": "BaseBdev1", 00:25:30.342 "aliases": [ 00:25:30.342 "c51e512f-fc78-48ec-8ba9-cfb63b9b7702" 00:25:30.342 ], 00:25:30.342 "product_name": "Malloc disk", 00:25:30.342 "block_size": 512, 00:25:30.342 "num_blocks": 65536, 00:25:30.342 "uuid": "c51e512f-fc78-48ec-8ba9-cfb63b9b7702", 00:25:30.342 "assigned_rate_limits": { 00:25:30.342 "rw_ios_per_sec": 0, 00:25:30.342 "rw_mbytes_per_sec": 0, 00:25:30.342 "r_mbytes_per_sec": 0, 00:25:30.342 "w_mbytes_per_sec": 0 00:25:30.342 }, 00:25:30.342 "claimed": false, 00:25:30.342 "zoned": false, 00:25:30.342 "supported_io_types": { 00:25:30.342 "read": true, 00:25:30.342 "write": true, 00:25:30.342 "unmap": true, 00:25:30.342 "write_zeroes": true, 00:25:30.342 "flush": true, 00:25:30.342 "reset": true, 00:25:30.342 "compare": false, 00:25:30.342 "compare_and_write": false, 00:25:30.342 "abort": true, 00:25:30.342 "nvme_admin": false, 00:25:30.342 "nvme_io": false 00:25:30.342 }, 00:25:30.342 "memory_domains": [ 00:25:30.342 { 00:25:30.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.342 "dma_device_type": 2 00:25:30.342 } 00:25:30.342 ], 00:25:30.342 "driver_specific": {} 00:25:30.342 } 00:25:30.342 ] 00:25:30.342 22:33:29 -- common/autotest_common.sh@895 -- # return 0 00:25:30.342 22:33:29 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:30.604 [2024-07-14 22:33:29.453797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:30.604 [2024-07-14 22:33:29.455435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:30.604 [2024-07-14 22:33:29.455621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:30.604 [2024-07-14 22:33:29.455736] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:30.604 [2024-07-14 22:33:29.455921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:30.604 [2024-07-14 22:33:29.456007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:30.604 [2024-07-14 22:33:29.456116] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:30.604 22:33:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:30.604 22:33:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.605 "name": "Existed_Raid", 00:25:30.605 "uuid": "7ba6ad6d-830d-483c-8098-525c3bdd8308", 00:25:30.605 "strip_size_kb": 64, 00:25:30.605 "state": "configuring", 00:25:30.605 "raid_level": "raid5f", 00:25:30.605 "superblock": true, 00:25:30.605 "num_base_bdevs": 4, 00:25:30.605 "num_base_bdevs_discovered": 1, 00:25:30.605 "num_base_bdevs_operational": 4, 00:25:30.605 "base_bdevs_list": [ 00:25:30.605 { 00:25:30.605 "name": "BaseBdev1", 00:25:30.605 "uuid": "c51e512f-fc78-48ec-8ba9-cfb63b9b7702", 00:25:30.605 "is_configured": true, 00:25:30.605 "data_offset": 2048, 00:25:30.605 "data_size": 63488 00:25:30.605 }, 00:25:30.605 { 00:25:30.605 "name": "BaseBdev2", 00:25:30.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.605 "is_configured": false, 00:25:30.605 "data_offset": 0, 00:25:30.605 "data_size": 0 00:25:30.605 }, 00:25:30.605 { 00:25:30.605 "name": "BaseBdev3", 00:25:30.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.605 "is_configured": false, 00:25:30.605 "data_offset": 0, 00:25:30.605 "data_size": 0 00:25:30.605 }, 00:25:30.605 { 00:25:30.605 "name": "BaseBdev4", 00:25:30.605 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:30.605 "is_configured": false, 00:25:30.605 "data_offset": 0, 00:25:30.605 "data_size": 0 00:25:30.605 } 00:25:30.605 ] 00:25:30.605 }' 00:25:30.605 22:33:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.605 22:33:29 -- common/autotest_common.sh@10 -- # set +x 00:25:31.561 22:33:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:31.561 [2024-07-14 22:33:30.565911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.561 BaseBdev2 00:25:31.561 22:33:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:31.561 22:33:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:31.561 22:33:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:31.561 22:33:30 -- common/autotest_common.sh@889 -- # local i 00:25:31.561 22:33:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:31.561 22:33:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:31.561 22:33:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.819 22:33:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:32.078 [ 00:25:32.078 { 00:25:32.078 "name": "BaseBdev2", 00:25:32.078 "aliases": [ 00:25:32.078 "adfe3368-4568-4f18-88f1-658bbc735a68" 00:25:32.078 ], 00:25:32.078 "product_name": "Malloc disk", 00:25:32.078 "block_size": 512, 00:25:32.078 "num_blocks": 65536, 00:25:32.078 "uuid": "adfe3368-4568-4f18-88f1-658bbc735a68", 00:25:32.078 "assigned_rate_limits": { 00:25:32.078 "rw_ios_per_sec": 0, 00:25:32.078 "rw_mbytes_per_sec": 0, 00:25:32.078 "r_mbytes_per_sec": 0, 00:25:32.078 "w_mbytes_per_sec": 0 00:25:32.078 }, 00:25:32.078 "claimed": true, 00:25:32.078 "claim_type": "exclusive_write", 00:25:32.078 "zoned": false, 00:25:32.078 "supported_io_types": { 00:25:32.078 "read": true, 00:25:32.078 "write": true, 00:25:32.078 "unmap": true, 00:25:32.078 "write_zeroes": true, 00:25:32.078 "flush": true, 00:25:32.078 "reset": true, 00:25:32.078 "compare": false, 00:25:32.078 "compare_and_write": false, 00:25:32.078 "abort": true, 00:25:32.078 "nvme_admin": false, 00:25:32.078 "nvme_io": false 00:25:32.078 }, 00:25:32.078 "memory_domains": [ 00:25:32.078 { 00:25:32.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.078 "dma_device_type": 2 00:25:32.078 } 00:25:32.078 ], 00:25:32.078 "driver_specific": {} 00:25:32.078 } 00:25:32.078 ] 00:25:32.078 22:33:31 -- common/autotest_common.sh@895 -- # return 0 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:32.078 22:33:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.079 22:33:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.337 22:33:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:32.337 "name": "Existed_Raid", 00:25:32.337 "uuid": "7ba6ad6d-830d-483c-8098-525c3bdd8308", 00:25:32.337 "strip_size_kb": 64, 00:25:32.337 "state": "configuring", 00:25:32.337 "raid_level": "raid5f", 00:25:32.337 "superblock": true, 00:25:32.337 "num_base_bdevs": 4, 00:25:32.337 "num_base_bdevs_discovered": 2, 00:25:32.337 "num_base_bdevs_operational": 4, 00:25:32.337 "base_bdevs_list": [ 00:25:32.337 { 00:25:32.337 "name": "BaseBdev1", 00:25:32.337 "uuid": "c51e512f-fc78-48ec-8ba9-cfb63b9b7702", 00:25:32.337 "is_configured": true, 00:25:32.337 "data_offset": 2048, 00:25:32.337 "data_size": 63488 00:25:32.337 }, 00:25:32.337 { 00:25:32.337 "name": "BaseBdev2", 00:25:32.337 "uuid": "adfe3368-4568-4f18-88f1-658bbc735a68", 00:25:32.337 "is_configured": true, 00:25:32.337 "data_offset": 2048, 00:25:32.337 "data_size": 63488 00:25:32.337 }, 00:25:32.337 { 00:25:32.337 "name": "BaseBdev3", 00:25:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.337 "is_configured": false, 00:25:32.337 "data_offset": 0, 00:25:32.337 "data_size": 0 00:25:32.337 }, 00:25:32.337 { 00:25:32.337 "name": "BaseBdev4", 00:25:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.337 "is_configured": false, 00:25:32.338 "data_offset": 0, 00:25:32.338 "data_size": 0 00:25:32.338 } 00:25:32.338 ] 00:25:32.338 }' 00:25:32.338 22:33:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:32.338 22:33:31 -- common/autotest_common.sh@10 -- # set +x 00:25:33.274 22:33:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:33.274 [2024-07-14 22:33:32.197301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:33.274 BaseBdev3 00:25:33.274 22:33:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:33.274 22:33:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:33.274 22:33:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:33.274 22:33:32 -- common/autotest_common.sh@889 -- # local i 00:25:33.274 22:33:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:33.274 22:33:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:33.274 22:33:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.533 22:33:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:33.792 [ 00:25:33.792 { 00:25:33.792 "name": "BaseBdev3", 00:25:33.792 "aliases": [ 00:25:33.792 "3eea3d22-3537-42ec-b714-4bb853e2e1df" 00:25:33.792 ], 00:25:33.792 "product_name": "Malloc disk", 00:25:33.792 "block_size": 512, 00:25:33.792 "num_blocks": 65536, 00:25:33.792 "uuid": "3eea3d22-3537-42ec-b714-4bb853e2e1df", 00:25:33.792 "assigned_rate_limits": { 00:25:33.792 "rw_ios_per_sec": 0, 00:25:33.792 "rw_mbytes_per_sec": 0, 00:25:33.792 "r_mbytes_per_sec": 0, 00:25:33.792 "w_mbytes_per_sec": 0 00:25:33.792 }, 00:25:33.792 "claimed": true, 00:25:33.792 "claim_type": "exclusive_write", 
00:25:33.792 "zoned": false, 00:25:33.792 "supported_io_types": { 00:25:33.792 "read": true, 00:25:33.792 "write": true, 00:25:33.792 "unmap": true, 00:25:33.792 "write_zeroes": true, 00:25:33.792 "flush": true, 00:25:33.792 "reset": true, 00:25:33.792 "compare": false, 00:25:33.792 "compare_and_write": false, 00:25:33.792 "abort": true, 00:25:33.792 "nvme_admin": false, 00:25:33.792 "nvme_io": false 00:25:33.792 }, 00:25:33.792 "memory_domains": [ 00:25:33.792 { 00:25:33.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.792 "dma_device_type": 2 00:25:33.792 } 00:25:33.792 ], 00:25:33.792 "driver_specific": {} 00:25:33.792 } 00:25:33.792 ] 00:25:33.792 22:33:32 -- common/autotest_common.sh@895 -- # return 0 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.792 22:33:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:34.050 22:33:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.050 "name": "Existed_Raid", 00:25:34.050 "uuid": "7ba6ad6d-830d-483c-8098-525c3bdd8308", 00:25:34.050 "strip_size_kb": 64, 00:25:34.050 "state": "configuring", 00:25:34.050 "raid_level": "raid5f", 00:25:34.050 "superblock": true, 00:25:34.050 "num_base_bdevs": 4, 00:25:34.050 "num_base_bdevs_discovered": 3, 00:25:34.051 "num_base_bdevs_operational": 4, 00:25:34.051 "base_bdevs_list": [ 00:25:34.051 { 00:25:34.051 "name": "BaseBdev1", 00:25:34.051 "uuid": "c51e512f-fc78-48ec-8ba9-cfb63b9b7702", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 2048, 00:25:34.051 "data_size": 63488 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev2", 00:25:34.051 "uuid": "adfe3368-4568-4f18-88f1-658bbc735a68", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 2048, 00:25:34.051 "data_size": 63488 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev3", 00:25:34.051 "uuid": "3eea3d22-3537-42ec-b714-4bb853e2e1df", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 2048, 00:25:34.051 "data_size": 63488 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev4", 00:25:34.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.051 "is_configured": false, 00:25:34.051 "data_offset": 0, 00:25:34.051 "data_size": 0 00:25:34.051 } 00:25:34.051 ] 00:25:34.051 }' 00:25:34.051 22:33:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.051 22:33:32 -- common/autotest_common.sh@10 -- # set +x 00:25:34.642 22:33:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:34.900 [2024-07-14 22:33:33.738553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:34.900 [2024-07-14 22:33:33.739123] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:25:34.900 BaseBdev4 00:25:34.900 [2024-07-14 22:33:33.739244] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:34.900 [2024-07-14 22:33:33.739402] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:34.900 [2024-07-14 22:33:33.745213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:25:34.900 [2024-07-14 22:33:33.745364] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:25:34.900 [2024-07-14 22:33:33.745655] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.900 22:33:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:34.900 22:33:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:34.900 22:33:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:34.900 22:33:33 -- common/autotest_common.sh@889 -- # local i 00:25:34.900 22:33:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:34.900 22:33:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:34.900 22:33:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:34.900 22:33:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:35.159 [ 00:25:35.159 { 00:25:35.159 "name": "BaseBdev4", 00:25:35.159 "aliases": [ 00:25:35.159 "462a6761-2b7d-42c4-8bb8-8a5801427dbd" 00:25:35.159 ], 00:25:35.159 "product_name": "Malloc disk", 00:25:35.159 "block_size": 512, 00:25:35.159 "num_blocks": 65536, 00:25:35.159 "uuid": "462a6761-2b7d-42c4-8bb8-8a5801427dbd", 00:25:35.159 "assigned_rate_limits": { 00:25:35.159 "rw_ios_per_sec": 0, 00:25:35.159 "rw_mbytes_per_sec": 0, 00:25:35.159 "r_mbytes_per_sec": 0, 00:25:35.159 "w_mbytes_per_sec": 0 00:25:35.159 }, 00:25:35.159 "claimed": true, 00:25:35.159 "claim_type": "exclusive_write", 00:25:35.159 "zoned": false, 00:25:35.159 "supported_io_types": { 00:25:35.159 "read": true, 00:25:35.159 "write": true, 00:25:35.159 "unmap": true, 00:25:35.159 "write_zeroes": true, 00:25:35.159 "flush": true, 00:25:35.159 "reset": true, 00:25:35.159 "compare": false, 00:25:35.159 "compare_and_write": false, 00:25:35.159 "abort": true, 00:25:35.159 "nvme_admin": false, 00:25:35.159 "nvme_io": false 00:25:35.159 }, 00:25:35.159 "memory_domains": [ 00:25:35.159 { 00:25:35.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.159 "dma_device_type": 2 00:25:35.159 } 00:25:35.159 ], 00:25:35.159 "driver_specific": {} 00:25:35.159 } 00:25:35.159 ] 00:25:35.159 22:33:34 -- common/autotest_common.sh@895 -- # return 0 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.159 22:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.417 22:33:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.417 "name": "Existed_Raid", 00:25:35.417 "uuid": "7ba6ad6d-830d-483c-8098-525c3bdd8308", 00:25:35.417 "strip_size_kb": 64, 00:25:35.417 "state": "online", 00:25:35.417 "raid_level": "raid5f", 00:25:35.417 "superblock": true, 00:25:35.417 "num_base_bdevs": 4, 00:25:35.417 "num_base_bdevs_discovered": 4, 00:25:35.417 "num_base_bdevs_operational": 4, 00:25:35.417 "base_bdevs_list": [ 00:25:35.417 { 00:25:35.417 "name": "BaseBdev1", 00:25:35.417 "uuid": "c51e512f-fc78-48ec-8ba9-cfb63b9b7702", 00:25:35.417 "is_configured": true, 00:25:35.417 "data_offset": 2048, 00:25:35.417 "data_size": 63488 00:25:35.417 }, 00:25:35.417 { 00:25:35.417 "name": "BaseBdev2", 00:25:35.417 "uuid": "adfe3368-4568-4f18-88f1-658bbc735a68", 00:25:35.417 "is_configured": true, 00:25:35.417 "data_offset": 2048, 00:25:35.417 "data_size": 63488 00:25:35.417 }, 00:25:35.417 { 00:25:35.417 "name": "BaseBdev3", 00:25:35.417 "uuid": "3eea3d22-3537-42ec-b714-4bb853e2e1df", 00:25:35.417 "is_configured": true, 00:25:35.417 "data_offset": 2048, 00:25:35.417 "data_size": 63488 00:25:35.417 }, 00:25:35.417 { 00:25:35.417 "name": "BaseBdev4", 00:25:35.417 "uuid": "462a6761-2b7d-42c4-8bb8-8a5801427dbd", 00:25:35.417 "is_configured": true, 00:25:35.417 "data_offset": 2048, 00:25:35.417 "data_size": 63488 00:25:35.417 } 00:25:35.417 ] 00:25:35.417 }' 00:25:35.417 22:33:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.417 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:36.351 [2024-07-14 22:33:35.313114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:36.351 22:33:35 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.351 22:33:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.610 22:33:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:36.610 "name": "Existed_Raid", 00:25:36.610 "uuid": "7ba6ad6d-830d-483c-8098-525c3bdd8308", 00:25:36.610 "strip_size_kb": 64, 00:25:36.610 "state": "online", 00:25:36.610 "raid_level": "raid5f", 00:25:36.610 "superblock": true, 00:25:36.610 "num_base_bdevs": 4, 00:25:36.610 "num_base_bdevs_discovered": 3, 00:25:36.610 "num_base_bdevs_operational": 3, 00:25:36.610 "base_bdevs_list": [ 00:25:36.610 { 00:25:36.610 "name": null, 00:25:36.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.610 "is_configured": false, 00:25:36.610 "data_offset": 2048, 00:25:36.610 "data_size": 63488 00:25:36.610 }, 00:25:36.610 { 00:25:36.610 "name": "BaseBdev2", 00:25:36.610 "uuid": "adfe3368-4568-4f18-88f1-658bbc735a68", 00:25:36.610 "is_configured": true, 00:25:36.610 "data_offset": 2048, 00:25:36.610 "data_size": 63488 00:25:36.610 }, 00:25:36.610 { 00:25:36.610 "name": "BaseBdev3", 00:25:36.610 "uuid": "3eea3d22-3537-42ec-b714-4bb853e2e1df", 00:25:36.610 "is_configured": true, 00:25:36.610 "data_offset": 2048, 00:25:36.610 "data_size": 63488 00:25:36.610 }, 00:25:36.610 { 00:25:36.610 "name": "BaseBdev4", 00:25:36.610 "uuid": "462a6761-2b7d-42c4-8bb8-8a5801427dbd", 00:25:36.610 "is_configured": true, 00:25:36.610 "data_offset": 2048, 00:25:36.610 "data_size": 63488 00:25:36.610 } 00:25:36.610 ] 00:25:36.610 }' 00:25:36.610 22:33:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:36.610 22:33:35 -- common/autotest_common.sh@10 -- # set +x 00:25:37.178 22:33:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:37.178 22:33:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:37.178 22:33:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.178 22:33:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:37.436 22:33:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:37.436 22:33:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:37.436 22:33:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:37.694 [2024-07-14 22:33:36.551369] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:37.694 [2024-07-14 22:33:36.551514] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.694 [2024-07-14 22:33:36.551661] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.694 22:33:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:37.694 22:33:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:37.694 22:33:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:37.694 22:33:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.953 22:33:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:37.953 22:33:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:37.953 22:33:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:38.212 [2024-07-14 22:33:37.040178] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:38.212 22:33:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:38.212 22:33:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:38.212 22:33:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.212 22:33:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:38.470 22:33:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:38.470 22:33:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:38.470 22:33:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:38.729 [2024-07-14 22:33:37.648291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:38.729 [2024-07-14 22:33:37.648651] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:25:38.729 22:33:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:38.729 22:33:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:38.729 22:33:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:38.729 22:33:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.989 22:33:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:38.989 22:33:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:38.989 22:33:37 -- bdev/bdev_raid.sh@287 -- # killprocess 133610 00:25:38.989 22:33:37 -- common/autotest_common.sh@926 -- # '[' -z 133610 ']' 00:25:38.989 22:33:37 -- common/autotest_common.sh@930 -- # kill -0 133610 00:25:38.989 22:33:37 -- common/autotest_common.sh@931 -- # uname 00:25:38.989 22:33:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.989 22:33:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133610 00:25:38.989 killing process with pid 133610 00:25:38.989 22:33:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.989 22:33:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.989 22:33:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133610' 00:25:38.989 22:33:37 -- common/autotest_common.sh@945 -- # kill 133610 00:25:38.989 22:33:37 -- common/autotest_common.sh@950 -- # wait 133610 00:25:38.989 [2024-07-14 22:33:37.962282] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.989 [2024-07-14 22:33:37.962405] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.366 ************************************ 00:25:40.366 END TEST raid5f_state_function_test_sb 00:25:40.366 ************************************ 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:40.366 00:25:40.366 real 0m15.326s 00:25:40.366 user 0m27.328s 00:25:40.366 sys 0m1.820s 00:25:40.366 22:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.366 22:33:39 -- common/autotest_common.sh@10 -- # set +x 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:25:40.366 22:33:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:40.366 22:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.366 22:33:39 -- common/autotest_common.sh@10 -- # set +x 00:25:40.366 
************************************ 00:25:40.366 START TEST raid5f_superblock_test 00:25:40.366 ************************************ 00:25:40.366 22:33:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=134097 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:40.366 22:33:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134097 /var/tmp/spdk-raid.sock 00:25:40.366 22:33:39 -- common/autotest_common.sh@819 -- # '[' -z 134097 ']' 00:25:40.366 22:33:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:40.366 22:33:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.366 22:33:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:40.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:40.366 22:33:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.366 22:33:39 -- common/autotest_common.sh@10 -- # set +x 00:25:40.366 [2024-07-14 22:33:39.190650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
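The superblock test repeats the assembly flow, but first wraps each malloc device in a passthru bdev with a fixed UUID, so the raid metadata written with -s through pt1..pt4 passes through to the underlying malloc bdevs. A minimal sketch of the per-device setup and the assembly call, using only RPCs and paths that appear in this trace (the UUIDs are the test's fixed values):

  # one passthru bdev per malloc device, each with a deterministic UUID
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

  # assemble raid_bdev1 from the four passthru bdevs; with all bases present
  # the array goes straight to "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

Because -s persists a superblock, the later part of this trace can delete raid_bdev1 and the pt* bdevs and then assert, via the NOT wrapper, that bdev_raid_create on the bare malloc1..malloc4 devices is rejected with "Existing raid superblock found on bdev mallocN".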
00:25:40.366 [2024-07-14 22:33:39.191021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134097 ] 00:25:40.366 [2024-07-14 22:33:39.361039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.625 [2024-07-14 22:33:39.564616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.884 [2024-07-14 22:33:39.729164] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.142 22:33:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.142 22:33:40 -- common/autotest_common.sh@852 -- # return 0 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:41.142 22:33:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:41.400 malloc1 00:25:41.401 22:33:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:41.659 [2024-07-14 22:33:40.529418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:41.659 [2024-07-14 22:33:40.529647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.659 [2024-07-14 22:33:40.529712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:41.659 [2024-07-14 22:33:40.529861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.659 [2024-07-14 22:33:40.531859] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.659 [2024-07-14 22:33:40.532031] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:41.659 pt1 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:41.659 22:33:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:41.917 malloc2 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:25:41.917 [2024-07-14 22:33:40.969515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:41.917 [2024-07-14 22:33:40.969704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.917 [2024-07-14 22:33:40.969774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:41.917 [2024-07-14 22:33:40.969914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.917 [2024-07-14 22:33:40.971793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.917 [2024-07-14 22:33:40.971952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:41.917 pt2 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:41.917 22:33:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:42.175 malloc3 00:25:42.175 22:33:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:42.434 [2024-07-14 22:33:41.378862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:42.434 [2024-07-14 22:33:41.379056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.434 [2024-07-14 22:33:41.379125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:42.434 [2024-07-14 22:33:41.379285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.434 [2024-07-14 22:33:41.381179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.434 [2024-07-14 22:33:41.381357] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:42.434 pt3 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:42.434 22:33:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:42.692 malloc4 00:25:42.692 22:33:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:25:42.951 [2024-07-14 22:33:41.795464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:42.951 [2024-07-14 22:33:41.795655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.951 [2024-07-14 22:33:41.795726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:42.951 [2024-07-14 22:33:41.795874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.951 [2024-07-14 22:33:41.797796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.951 [2024-07-14 22:33:41.797956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:42.951 pt4 00:25:42.951 22:33:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:42.951 22:33:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:42.952 22:33:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:42.952 [2024-07-14 22:33:41.995540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:42.952 [2024-07-14 22:33:41.997397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:42.952 [2024-07-14 22:33:41.997576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:42.952 [2024-07-14 22:33:41.997694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:42.952 [2024-07-14 22:33:41.997982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:42.952 [2024-07-14 22:33:41.998095] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:42.952 [2024-07-14 22:33:41.998249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:42.952 [2024-07-14 22:33:42.003722] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:42.952 [2024-07-14 22:33:42.003885] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:42.952 [2024-07-14 22:33:42.004147] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.952 22:33:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.210 22:33:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:43.210 "name": "raid_bdev1", 00:25:43.210 "uuid": 
"808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:43.210 "strip_size_kb": 64, 00:25:43.210 "state": "online", 00:25:43.210 "raid_level": "raid5f", 00:25:43.210 "superblock": true, 00:25:43.210 "num_base_bdevs": 4, 00:25:43.210 "num_base_bdevs_discovered": 4, 00:25:43.210 "num_base_bdevs_operational": 4, 00:25:43.210 "base_bdevs_list": [ 00:25:43.210 { 00:25:43.210 "name": "pt1", 00:25:43.210 "uuid": "68190822-e79d-5a67-b05c-c3387559c182", 00:25:43.210 "is_configured": true, 00:25:43.210 "data_offset": 2048, 00:25:43.210 "data_size": 63488 00:25:43.210 }, 00:25:43.210 { 00:25:43.210 "name": "pt2", 00:25:43.210 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:43.210 "is_configured": true, 00:25:43.210 "data_offset": 2048, 00:25:43.210 "data_size": 63488 00:25:43.210 }, 00:25:43.210 { 00:25:43.210 "name": "pt3", 00:25:43.210 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:43.210 "is_configured": true, 00:25:43.210 "data_offset": 2048, 00:25:43.210 "data_size": 63488 00:25:43.210 }, 00:25:43.210 { 00:25:43.210 "name": "pt4", 00:25:43.210 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:43.210 "is_configured": true, 00:25:43.210 "data_offset": 2048, 00:25:43.210 "data_size": 63488 00:25:43.210 } 00:25:43.210 ] 00:25:43.210 }' 00:25:43.210 22:33:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:43.210 22:33:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.145 22:33:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:44.145 22:33:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:44.145 [2024-07-14 22:33:43.042327] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:44.145 22:33:43 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=808c8938-aa1a-4fe2-98a6-d21b68567041 00:25:44.145 22:33:43 -- bdev/bdev_raid.sh@380 -- # '[' -z 808c8938-aa1a-4fe2-98a6-d21b68567041 ']' 00:25:44.145 22:33:43 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:44.404 [2024-07-14 22:33:43.274256] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:44.404 [2024-07-14 22:33:43.274396] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:44.404 [2024-07-14 22:33:43.274590] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.404 [2024-07-14 22:33:43.274812] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.404 [2024-07-14 22:33:43.274935] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:44.404 22:33:43 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.404 22:33:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.664 22:33:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:25:44.926 22:33:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.926 22:33:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:45.184 22:33:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:45.184 22:33:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:45.442 22:33:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:45.442 22:33:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:45.700 22:33:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:45.700 22:33:44 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:45.700 22:33:44 -- common/autotest_common.sh@640 -- # local es=0 00:25:45.700 22:33:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:45.700 22:33:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.700 22:33:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.700 22:33:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.700 22:33:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.700 22:33:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.700 22:33:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.700 22:33:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.700 22:33:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:45.701 22:33:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:45.701 [2024-07-14 22:33:44.742496] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:45.701 [2024-07-14 22:33:44.744551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:45.701 [2024-07-14 22:33:44.744748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:45.701 [2024-07-14 22:33:44.744970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:45.701 [2024-07-14 22:33:44.745153] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:45.701 [2024-07-14 22:33:44.745257] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:45.701 [2024-07-14 22:33:44.745338] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:45.701 [2024-07-14 22:33:44.745438] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:25:45.701 [2024-07-14 22:33:44.745581] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.701 [2024-07-14 22:33:44.745665] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:25:45.701 request: 00:25:45.701 { 00:25:45.701 "name": "raid_bdev1", 00:25:45.701 "raid_level": "raid5f", 00:25:45.701 "base_bdevs": [ 00:25:45.701 "malloc1", 00:25:45.701 "malloc2", 00:25:45.701 "malloc3", 00:25:45.701 "malloc4" 00:25:45.701 ], 00:25:45.701 "superblock": false, 00:25:45.701 "strip_size_kb": 64, 00:25:45.701 "method": "bdev_raid_create", 00:25:45.701 "req_id": 1 00:25:45.701 } 00:25:45.701 Got JSON-RPC error response 00:25:45.701 response: 00:25:45.701 { 00:25:45.701 "code": -17, 00:25:45.701 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:45.701 } 00:25:45.701 22:33:44 -- common/autotest_common.sh@643 -- # es=1 00:25:45.701 22:33:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:45.701 22:33:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:45.701 22:33:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:45.701 22:33:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:45.701 22:33:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.959 22:33:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:45.959 22:33:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:45.959 22:33:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:46.217 [2024-07-14 22:33:45.134508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:46.217 [2024-07-14 22:33:45.134720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.217 [2024-07-14 22:33:45.134779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:46.217 [2024-07-14 22:33:45.134886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.217 [2024-07-14 22:33:45.136793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.217 [2024-07-14 22:33:45.136984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:46.217 [2024-07-14 22:33:45.137162] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:46.217 [2024-07-14 22:33:45.137337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:46.217 pt1 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.217 22:33:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.475 22:33:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.475 "name": "raid_bdev1", 00:25:46.475 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:46.475 "strip_size_kb": 64, 00:25:46.475 "state": "configuring", 00:25:46.475 "raid_level": "raid5f", 00:25:46.475 "superblock": true, 00:25:46.475 "num_base_bdevs": 4, 00:25:46.475 "num_base_bdevs_discovered": 1, 00:25:46.475 "num_base_bdevs_operational": 4, 00:25:46.475 "base_bdevs_list": [ 00:25:46.475 { 00:25:46.475 "name": "pt1", 00:25:46.475 "uuid": "68190822-e79d-5a67-b05c-c3387559c182", 00:25:46.475 "is_configured": true, 00:25:46.475 "data_offset": 2048, 00:25:46.475 "data_size": 63488 00:25:46.475 }, 00:25:46.475 { 00:25:46.475 "name": null, 00:25:46.475 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:46.475 "is_configured": false, 00:25:46.475 "data_offset": 2048, 00:25:46.475 "data_size": 63488 00:25:46.475 }, 00:25:46.475 { 00:25:46.475 "name": null, 00:25:46.475 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:46.475 "is_configured": false, 00:25:46.475 "data_offset": 2048, 00:25:46.475 "data_size": 63488 00:25:46.475 }, 00:25:46.475 { 00:25:46.475 "name": null, 00:25:46.475 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:46.475 "is_configured": false, 00:25:46.475 "data_offset": 2048, 00:25:46.475 "data_size": 63488 00:25:46.475 } 00:25:46.475 ] 00:25:46.475 }' 00:25:46.475 22:33:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.475 22:33:45 -- common/autotest_common.sh@10 -- # set +x 00:25:47.040 22:33:46 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:25:47.040 22:33:46 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:47.298 [2024-07-14 22:33:46.266743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:47.298 [2024-07-14 22:33:46.266795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.298 [2024-07-14 22:33:46.266831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:47.298 [2024-07-14 22:33:46.266849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.298 [2024-07-14 22:33:46.267203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.299 [2024-07-14 22:33:46.267242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:47.299 [2024-07-14 22:33:46.267321] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:47.299 [2024-07-14 22:33:46.267353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:47.299 pt2 00:25:47.299 22:33:46 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:47.557 [2024-07-14 22:33:46.518790] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:47.557 22:33:46 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.557 22:33:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.815 22:33:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.815 "name": "raid_bdev1", 00:25:47.815 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:47.815 "strip_size_kb": 64, 00:25:47.815 "state": "configuring", 00:25:47.815 "raid_level": "raid5f", 00:25:47.815 "superblock": true, 00:25:47.815 "num_base_bdevs": 4, 00:25:47.815 "num_base_bdevs_discovered": 1, 00:25:47.815 "num_base_bdevs_operational": 4, 00:25:47.815 "base_bdevs_list": [ 00:25:47.815 { 00:25:47.815 "name": "pt1", 00:25:47.815 "uuid": "68190822-e79d-5a67-b05c-c3387559c182", 00:25:47.815 "is_configured": true, 00:25:47.815 "data_offset": 2048, 00:25:47.815 "data_size": 63488 00:25:47.815 }, 00:25:47.815 { 00:25:47.815 "name": null, 00:25:47.815 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:47.815 "is_configured": false, 00:25:47.815 "data_offset": 2048, 00:25:47.815 "data_size": 63488 00:25:47.815 }, 00:25:47.815 { 00:25:47.815 "name": null, 00:25:47.815 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:47.815 "is_configured": false, 00:25:47.815 "data_offset": 2048, 00:25:47.815 "data_size": 63488 00:25:47.815 }, 00:25:47.815 { 00:25:47.815 "name": null, 00:25:47.815 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:47.815 "is_configured": false, 00:25:47.816 "data_offset": 2048, 00:25:47.816 "data_size": 63488 00:25:47.816 } 00:25:47.816 ] 00:25:47.816 }' 00:25:47.816 22:33:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.816 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:25:48.383 22:33:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:48.383 22:33:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:48.383 22:33:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:48.641 [2024-07-14 22:33:47.659006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:48.641 [2024-07-14 22:33:47.659061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.641 [2024-07-14 22:33:47.659092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:48.641 [2024-07-14 22:33:47.659109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.641 [2024-07-14 22:33:47.659439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.641 [2024-07-14 22:33:47.659493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:48.641 [2024-07-14 22:33:47.659568] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:48.641 [2024-07-14 22:33:47.659598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:48.641 pt2 00:25:48.641 22:33:47 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:48.641 22:33:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:48.641 22:33:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:48.900 [2024-07-14 22:33:47.827051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:48.900 [2024-07-14 22:33:47.827103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.900 [2024-07-14 22:33:47.827128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:48.900 [2024-07-14 22:33:47.827149] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.900 [2024-07-14 22:33:47.827470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.900 [2024-07-14 22:33:47.827522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:48.900 [2024-07-14 22:33:47.827593] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:48.900 [2024-07-14 22:33:47.827616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:48.900 pt3 00:25:48.900 22:33:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:48.900 22:33:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:48.900 22:33:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:49.158 [2024-07-14 22:33:48.023111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:49.158 [2024-07-14 22:33:48.023168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.158 [2024-07-14 22:33:48.023195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:49.158 [2024-07-14 22:33:48.023215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.158 [2024-07-14 22:33:48.023527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.158 [2024-07-14 22:33:48.023575] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:49.158 [2024-07-14 22:33:48.023652] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:49.158 [2024-07-14 22:33:48.023683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:49.158 [2024-07-14 22:33:48.023840] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:49.158 [2024-07-14 22:33:48.023862] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:49.158 [2024-07-14 22:33:48.023961] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:49.158 [2024-07-14 22:33:48.028967] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:49.158 [2024-07-14 22:33:48.028990] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:49.158 [2024-07-14 22:33:48.029144] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.158 pt4 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.158 22:33:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.418 22:33:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:49.418 "name": "raid_bdev1", 00:25:49.418 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:49.418 "strip_size_kb": 64, 00:25:49.418 "state": "online", 00:25:49.418 "raid_level": "raid5f", 00:25:49.418 "superblock": true, 00:25:49.418 "num_base_bdevs": 4, 00:25:49.418 "num_base_bdevs_discovered": 4, 00:25:49.418 "num_base_bdevs_operational": 4, 00:25:49.418 "base_bdevs_list": [ 00:25:49.418 { 00:25:49.418 "name": "pt1", 00:25:49.418 "uuid": "68190822-e79d-5a67-b05c-c3387559c182", 00:25:49.418 "is_configured": true, 00:25:49.418 "data_offset": 2048, 00:25:49.418 "data_size": 63488 00:25:49.418 }, 00:25:49.418 { 00:25:49.418 "name": "pt2", 00:25:49.418 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:49.418 "is_configured": true, 00:25:49.418 "data_offset": 2048, 00:25:49.418 "data_size": 63488 00:25:49.418 }, 00:25:49.418 { 00:25:49.418 "name": "pt3", 00:25:49.418 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:49.418 "is_configured": true, 00:25:49.418 "data_offset": 2048, 00:25:49.418 "data_size": 63488 00:25:49.418 }, 00:25:49.418 { 00:25:49.418 "name": "pt4", 00:25:49.418 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:49.418 "is_configured": true, 00:25:49.418 "data_offset": 2048, 00:25:49.418 "data_size": 63488 00:25:49.418 } 00:25:49.418 ] 00:25:49.418 }' 00:25:49.418 22:33:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:49.418 22:33:48 -- common/autotest_common.sh@10 -- # set +x 00:25:49.985 22:33:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.985 22:33:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:50.243 [2024-07-14 22:33:49.127163] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:50.243 22:33:49 -- bdev/bdev_raid.sh@430 -- # '[' 808c8938-aa1a-4fe2-98a6-d21b68567041 '!=' 808c8938-aa1a-4fe2-98a6-d21b68567041 ']' 00:25:50.243 22:33:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:50.243 22:33:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:50.243 22:33:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:50.243 22:33:49 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:50.501 [2024-07-14 22:33:49.367148] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.501 22:33:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.758 22:33:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.758 "name": "raid_bdev1", 00:25:50.758 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:50.758 "strip_size_kb": 64, 00:25:50.758 "state": "online", 00:25:50.758 "raid_level": "raid5f", 00:25:50.758 "superblock": true, 00:25:50.758 "num_base_bdevs": 4, 00:25:50.758 "num_base_bdevs_discovered": 3, 00:25:50.758 "num_base_bdevs_operational": 3, 00:25:50.758 "base_bdevs_list": [ 00:25:50.758 { 00:25:50.758 "name": null, 00:25:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.758 "is_configured": false, 00:25:50.758 "data_offset": 2048, 00:25:50.758 "data_size": 63488 00:25:50.758 }, 00:25:50.758 { 00:25:50.758 "name": "pt2", 00:25:50.758 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:50.758 "is_configured": true, 00:25:50.758 "data_offset": 2048, 00:25:50.758 "data_size": 63488 00:25:50.758 }, 00:25:50.758 { 00:25:50.758 "name": "pt3", 00:25:50.758 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:50.758 "is_configured": true, 00:25:50.758 "data_offset": 2048, 00:25:50.758 "data_size": 63488 00:25:50.758 }, 00:25:50.758 { 00:25:50.758 "name": "pt4", 00:25:50.758 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:50.758 "is_configured": true, 00:25:50.758 "data_offset": 2048, 00:25:50.758 "data_size": 63488 00:25:50.758 } 00:25:50.758 ] 00:25:50.758 }' 00:25:50.758 22:33:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.758 22:33:49 -- common/autotest_common.sh@10 -- # set +x 00:25:51.324 22:33:50 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:51.583 [2024-07-14 22:33:50.535331] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:51.583 [2024-07-14 22:33:50.535355] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:51.583 [2024-07-14 22:33:50.535400] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.583 [2024-07-14 22:33:50.535457] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.583 [2024-07-14 22:33:50.535467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:51.583 22:33:50 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.583 22:33:50 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:51.842 
22:33:50 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:51.842 22:33:50 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:51.842 22:33:50 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:51.842 22:33:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:51.842 22:33:50 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:52.099 22:33:50 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:52.099 22:33:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:52.099 22:33:50 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:52.357 22:33:51 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:52.615 [2024-07-14 22:33:51.568240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:52.615 [2024-07-14 22:33:51.568432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.615 [2024-07-14 22:33:51.568484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:52.615 [2024-07-14 22:33:51.568524] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.615 [2024-07-14 22:33:51.571942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.615 [2024-07-14 22:33:51.572053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:52.615 [2024-07-14 22:33:51.572236] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:52.615 [2024-07-14 22:33:51.572312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:52.615 pt2 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.615 22:33:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.872 22:33:51 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:25:52.872 "name": "raid_bdev1", 00:25:52.872 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:52.872 "strip_size_kb": 64, 00:25:52.872 "state": "configuring", 00:25:52.872 "raid_level": "raid5f", 00:25:52.872 "superblock": true, 00:25:52.872 "num_base_bdevs": 4, 00:25:52.872 "num_base_bdevs_discovered": 1, 00:25:52.872 "num_base_bdevs_operational": 3, 00:25:52.872 "base_bdevs_list": [ 00:25:52.872 { 00:25:52.872 "name": null, 00:25:52.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.872 "is_configured": false, 00:25:52.872 "data_offset": 2048, 00:25:52.872 "data_size": 63488 00:25:52.872 }, 00:25:52.872 { 00:25:52.872 "name": "pt2", 00:25:52.872 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:52.872 "is_configured": true, 00:25:52.872 "data_offset": 2048, 00:25:52.872 "data_size": 63488 00:25:52.872 }, 00:25:52.872 { 00:25:52.872 "name": null, 00:25:52.872 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:52.872 "is_configured": false, 00:25:52.872 "data_offset": 2048, 00:25:52.872 "data_size": 63488 00:25:52.872 }, 00:25:52.872 { 00:25:52.872 "name": null, 00:25:52.872 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:52.872 "is_configured": false, 00:25:52.872 "data_offset": 2048, 00:25:52.872 "data_size": 63488 00:25:52.872 } 00:25:52.872 ] 00:25:52.872 }' 00:25:52.872 22:33:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:52.872 22:33:51 -- common/autotest_common.sh@10 -- # set +x 00:25:53.436 22:33:52 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:53.436 22:33:52 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:53.436 22:33:52 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:53.694 [2024-07-14 22:33:52.700653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:53.694 [2024-07-14 22:33:52.700809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.694 [2024-07-14 22:33:52.700861] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:53.694 [2024-07-14 22:33:52.700896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.694 [2024-07-14 22:33:52.701438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.694 [2024-07-14 22:33:52.701489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:53.694 [2024-07-14 22:33:52.701637] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:53.694 [2024-07-14 22:33:52.701669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:53.694 pt3 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.694 22:33:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.952 22:33:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.952 "name": "raid_bdev1", 00:25:53.952 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:53.952 "strip_size_kb": 64, 00:25:53.952 "state": "configuring", 00:25:53.952 "raid_level": "raid5f", 00:25:53.952 "superblock": true, 00:25:53.952 "num_base_bdevs": 4, 00:25:53.952 "num_base_bdevs_discovered": 2, 00:25:53.952 "num_base_bdevs_operational": 3, 00:25:53.952 "base_bdevs_list": [ 00:25:53.952 { 00:25:53.952 "name": null, 00:25:53.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.952 "is_configured": false, 00:25:53.952 "data_offset": 2048, 00:25:53.952 "data_size": 63488 00:25:53.952 }, 00:25:53.952 { 00:25:53.952 "name": "pt2", 00:25:53.952 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:53.952 "is_configured": true, 00:25:53.952 "data_offset": 2048, 00:25:53.952 "data_size": 63488 00:25:53.952 }, 00:25:53.952 { 00:25:53.952 "name": "pt3", 00:25:53.952 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:53.952 "is_configured": true, 00:25:53.952 "data_offset": 2048, 00:25:53.952 "data_size": 63488 00:25:53.952 }, 00:25:53.952 { 00:25:53.952 "name": null, 00:25:53.952 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:53.952 "is_configured": false, 00:25:53.952 "data_offset": 2048, 00:25:53.952 "data_size": 63488 00:25:53.952 } 00:25:53.952 ] 00:25:53.952 }' 00:25:53.952 22:33:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.952 22:33:52 -- common/autotest_common.sh@10 -- # set +x 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:54.882 [2024-07-14 22:33:53.908799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:54.882 [2024-07-14 22:33:53.908877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.882 [2024-07-14 22:33:53.908919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:54.882 [2024-07-14 22:33:53.908941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.882 [2024-07-14 22:33:53.909390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.882 [2024-07-14 22:33:53.909422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:54.882 [2024-07-14 22:33:53.909518] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:54.882 [2024-07-14 22:33:53.909548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:54.882 [2024-07-14 22:33:53.909678] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:25:54.882 [2024-07-14 22:33:53.909693] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:54.882 [2024-07-14 22:33:53.909800] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:25:54.882 [2024-07-14 22:33:53.915066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:25:54.882 [2024-07-14 22:33:53.915092] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:25:54.882 pt4 00:25:54.882 [2024-07-14 22:33:53.915357] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.882 22:33:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.139 22:33:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.139 "name": "raid_bdev1", 00:25:55.139 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:55.139 "strip_size_kb": 64, 00:25:55.139 "state": "online", 00:25:55.139 "raid_level": "raid5f", 00:25:55.139 "superblock": true, 00:25:55.139 "num_base_bdevs": 4, 00:25:55.139 "num_base_bdevs_discovered": 3, 00:25:55.139 "num_base_bdevs_operational": 3, 00:25:55.139 "base_bdevs_list": [ 00:25:55.139 { 00:25:55.139 "name": null, 00:25:55.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.139 "is_configured": false, 00:25:55.139 "data_offset": 2048, 00:25:55.139 "data_size": 63488 00:25:55.139 }, 00:25:55.139 { 00:25:55.139 "name": "pt2", 00:25:55.139 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:55.139 "is_configured": true, 00:25:55.139 "data_offset": 2048, 00:25:55.139 "data_size": 63488 00:25:55.139 }, 00:25:55.139 { 00:25:55.139 "name": "pt3", 00:25:55.139 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:55.139 "is_configured": true, 00:25:55.139 "data_offset": 2048, 00:25:55.139 "data_size": 63488 00:25:55.139 }, 00:25:55.139 { 00:25:55.139 "name": "pt4", 00:25:55.139 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:55.139 "is_configured": true, 00:25:55.139 "data_offset": 2048, 00:25:55.139 "data_size": 63488 00:25:55.139 } 00:25:55.139 ] 00:25:55.139 }' 00:25:55.139 22:33:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.139 22:33:54 -- common/autotest_common.sh@10 -- # set +x 00:25:55.706 22:33:54 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:55.706 22:33:54 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:55.964 [2024-07-14 22:33:54.961495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:55.964 [2024-07-14 22:33:54.961533] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:55.964 [2024-07-14 22:33:54.961618] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.964 [2024-07-14 22:33:54.961696] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:55.964 [2024-07-14 22:33:54.961709] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:55.964 22:33:54 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.964 22:33:54 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:56.223 22:33:55 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:56.223 22:33:55 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:56.223 22:33:55 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:56.481 [2024-07-14 22:33:55.413565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:56.481 [2024-07-14 22:33:55.413678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.481 [2024-07-14 22:33:55.413724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:56.481 [2024-07-14 22:33:55.413749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.481 [2024-07-14 22:33:55.415862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.481 [2024-07-14 22:33:55.415930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:56.481 [2024-07-14 22:33:55.416030] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:56.481 [2024-07-14 22:33:55.416082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:56.481 pt1 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.481 22:33:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.740 22:33:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.740 "name": "raid_bdev1", 00:25:56.740 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:56.740 "strip_size_kb": 64, 00:25:56.740 "state": "configuring", 00:25:56.740 "raid_level": "raid5f", 00:25:56.740 "superblock": true, 00:25:56.740 "num_base_bdevs": 4, 00:25:56.740 "num_base_bdevs_discovered": 1, 00:25:56.740 "num_base_bdevs_operational": 4, 00:25:56.740 "base_bdevs_list": [ 00:25:56.740 { 00:25:56.740 "name": "pt1", 00:25:56.740 "uuid": "68190822-e79d-5a67-b05c-c3387559c182", 00:25:56.740 "is_configured": true, 
00:25:56.740 "data_offset": 2048, 00:25:56.740 "data_size": 63488 00:25:56.740 }, 00:25:56.740 { 00:25:56.740 "name": null, 00:25:56.740 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:56.740 "is_configured": false, 00:25:56.740 "data_offset": 2048, 00:25:56.740 "data_size": 63488 00:25:56.740 }, 00:25:56.740 { 00:25:56.740 "name": null, 00:25:56.740 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:56.740 "is_configured": false, 00:25:56.740 "data_offset": 2048, 00:25:56.740 "data_size": 63488 00:25:56.740 }, 00:25:56.740 { 00:25:56.740 "name": null, 00:25:56.740 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:56.740 "is_configured": false, 00:25:56.740 "data_offset": 2048, 00:25:56.740 "data_size": 63488 00:25:56.740 } 00:25:56.740 ] 00:25:56.740 }' 00:25:56.740 22:33:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.740 22:33:55 -- common/autotest_common.sh@10 -- # set +x 00:25:57.307 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:57.307 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:57.307 22:33:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:57.600 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:57.600 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:57.600 22:33:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:57.859 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:57.859 22:33:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:57.859 22:33:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:58.130 22:33:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:58.130 22:33:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:58.130 22:33:57 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:58.130 22:33:57 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:58.400 [2024-07-14 22:33:57.205920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:58.400 [2024-07-14 22:33:57.205995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.400 [2024-07-14 22:33:57.206033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:25:58.400 [2024-07-14 22:33:57.206069] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.400 [2024-07-14 22:33:57.206442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.400 [2024-07-14 22:33:57.206502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:58.400 [2024-07-14 22:33:57.206588] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:58.400 [2024-07-14 22:33:57.206604] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:58.400 [2024-07-14 22:33:57.206611] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:58.400 [2024-07-14 22:33:57.206635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:25:58.400 [2024-07-14 22:33:57.206695] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:58.400 pt4 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.400 22:33:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.659 22:33:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.659 "name": "raid_bdev1", 00:25:58.659 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:25:58.659 "strip_size_kb": 64, 00:25:58.659 "state": "configuring", 00:25:58.659 "raid_level": "raid5f", 00:25:58.659 "superblock": true, 00:25:58.659 "num_base_bdevs": 4, 00:25:58.659 "num_base_bdevs_discovered": 1, 00:25:58.659 "num_base_bdevs_operational": 3, 00:25:58.659 "base_bdevs_list": [ 00:25:58.659 { 00:25:58.659 "name": null, 00:25:58.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.659 "is_configured": false, 00:25:58.659 "data_offset": 2048, 00:25:58.659 "data_size": 63488 00:25:58.659 }, 00:25:58.659 { 00:25:58.659 "name": null, 00:25:58.659 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:25:58.659 "is_configured": false, 00:25:58.659 "data_offset": 2048, 00:25:58.659 "data_size": 63488 00:25:58.659 }, 00:25:58.659 { 00:25:58.659 "name": null, 00:25:58.659 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:25:58.659 "is_configured": false, 00:25:58.659 "data_offset": 2048, 00:25:58.659 "data_size": 63488 00:25:58.659 }, 00:25:58.659 { 00:25:58.659 "name": "pt4", 00:25:58.659 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:25:58.659 "is_configured": true, 00:25:58.659 "data_offset": 2048, 00:25:58.659 "data_size": 63488 00:25:58.659 } 00:25:58.659 ] 00:25:58.659 }' 00:25:58.659 22:33:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.659 22:33:57 -- common/autotest_common.sh@10 -- # set +x 00:25:59.224 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:59.225 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:59.225 22:33:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:59.483 [2024-07-14 22:33:58.314166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:59.484 [2024-07-14 22:33:58.314294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.484 [2024-07-14 22:33:58.314336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:25:59.484 [2024-07-14 22:33:58.314365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.484 [2024-07-14 22:33:58.314833] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.484 [2024-07-14 22:33:58.314899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:59.484 [2024-07-14 22:33:58.314995] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:59.484 [2024-07-14 22:33:58.315023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:59.484 pt2 00:25:59.484 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:59.484 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:59.484 22:33:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:59.484 [2024-07-14 22:33:58.554217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:59.484 [2024-07-14 22:33:58.554294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.484 [2024-07-14 22:33:58.554325] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:25:59.484 [2024-07-14 22:33:58.554350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.742 [2024-07-14 22:33:58.554697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.742 [2024-07-14 22:33:58.554755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:59.742 [2024-07-14 22:33:58.554893] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:59.743 [2024-07-14 22:33:58.554923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:59.743 [2024-07-14 22:33:58.555045] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:25:59.743 [2024-07-14 22:33:58.555073] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:59.743 [2024-07-14 22:33:58.555188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:59.743 [2024-07-14 22:33:58.560388] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:25:59.743 [2024-07-14 22:33:58.560414] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:25:59.743 [2024-07-14 22:33:58.560633] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.743 pt3 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.743 22:33:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.002 22:33:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.002 "name": "raid_bdev1", 00:26:00.002 "uuid": "808c8938-aa1a-4fe2-98a6-d21b68567041", 00:26:00.002 "strip_size_kb": 64, 00:26:00.002 "state": "online", 00:26:00.002 "raid_level": "raid5f", 00:26:00.002 "superblock": true, 00:26:00.002 "num_base_bdevs": 4, 00:26:00.002 "num_base_bdevs_discovered": 3, 00:26:00.002 "num_base_bdevs_operational": 3, 00:26:00.002 "base_bdevs_list": [ 00:26:00.002 { 00:26:00.002 "name": null, 00:26:00.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.002 "is_configured": false, 00:26:00.002 "data_offset": 2048, 00:26:00.002 "data_size": 63488 00:26:00.002 }, 00:26:00.002 { 00:26:00.002 "name": "pt2", 00:26:00.002 "uuid": "51617fc5-076f-582b-b9cd-b7930d9387ab", 00:26:00.002 "is_configured": true, 00:26:00.002 "data_offset": 2048, 00:26:00.002 "data_size": 63488 00:26:00.002 }, 00:26:00.002 { 00:26:00.002 "name": "pt3", 00:26:00.002 "uuid": "9eada4e3-124a-561d-949c-0981d098947c", 00:26:00.002 "is_configured": true, 00:26:00.002 "data_offset": 2048, 00:26:00.002 "data_size": 63488 00:26:00.002 }, 00:26:00.002 { 00:26:00.002 "name": "pt4", 00:26:00.002 "uuid": "8431fb9e-e07b-5f94-a25b-cb947efe89fe", 00:26:00.002 "is_configured": true, 00:26:00.002 "data_offset": 2048, 00:26:00.002 "data_size": 63488 00:26:00.002 } 00:26:00.002 ] 00:26:00.002 }' 00:26:00.002 22:33:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.002 22:33:58 -- common/autotest_common.sh@10 -- # set +x 00:26:00.569 22:33:59 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:00.569 22:33:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:00.828 [2024-07-14 22:33:59.731025] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.828 22:33:59 -- bdev/bdev_raid.sh@506 -- # '[' 808c8938-aa1a-4fe2-98a6-d21b68567041 '!=' 808c8938-aa1a-4fe2-98a6-d21b68567041 ']' 00:26:00.828 22:33:59 -- bdev/bdev_raid.sh@511 -- # killprocess 134097 00:26:00.828 22:33:59 -- common/autotest_common.sh@926 -- # '[' -z 134097 ']' 00:26:00.828 22:33:59 -- common/autotest_common.sh@930 -- # kill -0 134097 00:26:00.828 22:33:59 -- common/autotest_common.sh@931 -- # uname 00:26:00.828 22:33:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:00.828 22:33:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134097 00:26:00.828 killing process with pid 134097 00:26:00.828 22:33:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:00.828 22:33:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:00.828 22:33:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134097' 00:26:00.828 22:33:59 -- common/autotest_common.sh@945 -- # kill 134097 00:26:00.828 22:33:59 -- common/autotest_common.sh@950 -- # wait 134097 00:26:00.828 [2024-07-14 22:33:59.769479] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:00.828 [2024-07-14 22:33:59.769537] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:00.828 [2024-07-14 22:33:59.769603] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:00.828 [2024-07-14 22:33:59.769614] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:26:01.087 [2024-07-14 22:34:00.028437] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:02.022 22:34:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:02.022 00:26:02.022 real 0m21.822s 00:26:02.022 user 0m40.392s 00:26:02.022 sys 0m2.526s 00:26:02.022 22:34:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.022 22:34:00 -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 ************************************ 00:26:02.022 END TEST raid5f_superblock_test 00:26:02.022 ************************************ 00:26:02.022 22:34:00 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:02.022 22:34:00 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:26:02.022 22:34:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:02.022 22:34:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:02.022 22:34:00 -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 ************************************ 00:26:02.022 START TEST raid5f_rebuild_test 00:26:02.022 ************************************ 00:26:02.022 22:34:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@544 -- # 
raid_pid=134803 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134803 /var/tmp/spdk-raid.sock 00:26:02.022 22:34:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:02.022 22:34:01 -- common/autotest_common.sh@819 -- # '[' -z 134803 ']' 00:26:02.022 22:34:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:02.022 22:34:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:02.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:02.022 22:34:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:02.022 22:34:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:02.022 22:34:01 -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 [2024-07-14 22:34:01.080535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:02.022 [2024-07-14 22:34:01.081350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134803 ] 00:26:02.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:02.022 Zero copy mechanism will not be used. 00:26:02.280 [2024-07-14 22:34:01.251201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.538 [2024-07-14 22:34:01.449780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.796 [2024-07-14 22:34:01.612168] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:03.053 22:34:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:03.053 22:34:02 -- common/autotest_common.sh@852 -- # return 0 00:26:03.053 22:34:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:03.053 22:34:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:03.053 22:34:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:03.310 BaseBdev1 00:26:03.310 22:34:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:03.310 22:34:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:03.310 22:34:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:03.568 BaseBdev2 00:26:03.568 22:34:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:03.568 22:34:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:03.568 22:34:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:03.827 BaseBdev3 00:26:03.827 22:34:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:03.827 22:34:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:03.827 22:34:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:04.085 BaseBdev4 00:26:04.085 22:34:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:04.348 spare_malloc 00:26:04.348 
22:34:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:04.608 spare_delay 00:26:04.608 22:34:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:04.865 [2024-07-14 22:34:03.785561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:04.865 [2024-07-14 22:34:03.785797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.865 [2024-07-14 22:34:03.785860] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:04.865 [2024-07-14 22:34:03.786033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.865 [2024-07-14 22:34:03.789013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.865 [2024-07-14 22:34:03.789214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:04.865 spare 00:26:04.865 22:34:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:05.123 [2024-07-14 22:34:04.017713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.123 [2024-07-14 22:34:04.020056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.123 [2024-07-14 22:34:04.020254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:05.123 [2024-07-14 22:34:04.020329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:05.123 [2024-07-14 22:34:04.020508] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:26:05.123 [2024-07-14 22:34:04.020548] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:05.123 [2024-07-14 22:34:04.020869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:05.123 [2024-07-14 22:34:04.027930] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:26:05.123 [2024-07-14 22:34:04.028089] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:26:05.123 [2024-07-14 22:34:04.028487] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:05.123 22:34:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.381 22:34:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.381 "name": "raid_bdev1", 00:26:05.381 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:05.381 "strip_size_kb": 64, 00:26:05.381 "state": "online", 00:26:05.381 "raid_level": "raid5f", 00:26:05.381 "superblock": false, 00:26:05.381 "num_base_bdevs": 4, 00:26:05.381 "num_base_bdevs_discovered": 4, 00:26:05.381 "num_base_bdevs_operational": 4, 00:26:05.381 "base_bdevs_list": [ 00:26:05.381 { 00:26:05.381 "name": "BaseBdev1", 00:26:05.381 "uuid": "c46b27c0-ea84-454c-bb79-170fa4645c44", 00:26:05.381 "is_configured": true, 00:26:05.381 "data_offset": 0, 00:26:05.381 "data_size": 65536 00:26:05.381 }, 00:26:05.381 { 00:26:05.381 "name": "BaseBdev2", 00:26:05.381 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:05.381 "is_configured": true, 00:26:05.381 "data_offset": 0, 00:26:05.381 "data_size": 65536 00:26:05.381 }, 00:26:05.381 { 00:26:05.381 "name": "BaseBdev3", 00:26:05.381 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:05.381 "is_configured": true, 00:26:05.381 "data_offset": 0, 00:26:05.381 "data_size": 65536 00:26:05.381 }, 00:26:05.381 { 00:26:05.381 "name": "BaseBdev4", 00:26:05.381 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:05.381 "is_configured": true, 00:26:05.382 "data_offset": 0, 00:26:05.382 "data_size": 65536 00:26:05.382 } 00:26:05.382 ] 00:26:05.382 }' 00:26:05.382 22:34:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.382 22:34:04 -- common/autotest_common.sh@10 -- # set +x 00:26:06.317 22:34:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:06.317 22:34:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:06.317 [2024-07-14 22:34:05.325260] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:06.317 22:34:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:26:06.317 22:34:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:06.317 22:34:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.574 22:34:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:06.574 22:34:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:06.574 22:34:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:06.574 22:34:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@12 -- # local i 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:06.575 22:34:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:06.833 [2024-07-14 22:34:05.869282] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:07.092 /dev/nbd0 00:26:07.092 22:34:05 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:26:07.092 22:34:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:07.092 22:34:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:07.092 22:34:05 -- common/autotest_common.sh@857 -- # local i 00:26:07.092 22:34:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:07.092 22:34:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:07.092 22:34:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:07.092 22:34:05 -- common/autotest_common.sh@861 -- # break 00:26:07.092 22:34:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:07.092 22:34:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:07.092 22:34:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:07.092 1+0 records in 00:26:07.092 1+0 records out 00:26:07.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456636 s, 9.0 MB/s 00:26:07.092 22:34:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.092 22:34:05 -- common/autotest_common.sh@874 -- # size=4096 00:26:07.092 22:34:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.092 22:34:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:07.092 22:34:05 -- common/autotest_common.sh@877 -- # return 0 00:26:07.092 22:34:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:07.092 22:34:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:07.092 22:34:05 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:07.092 22:34:05 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:07.092 22:34:05 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:07.092 22:34:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:07.658 512+0 records in 00:26:07.658 512+0 records out 00:26:07.658 100663296 bytes (101 MB, 96 MiB) copied, 0.629715 s, 160 MB/s 00:26:07.658 22:34:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@51 -- # local i 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:07.658 22:34:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:07.917 [2024-07-14 22:34:06.825301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@41 -- # break 00:26:07.917 22:34:06 -- bdev/nbd_common.sh@45 -- # return 0 
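For readers following the trace, the setup-and-fill sequence above condenses to a short script. A minimal sketch, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and the same repo layout as this run; every RPC, name, and size is taken verbatim from the log above, only the loop and comments are added:

#!/usr/bin/env bash
# Recreate the raid5f array under test and fill it in full stripes,
# mirroring the raid5f_rebuild_test flow traced above.
set -euo pipefail
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four 32 MiB malloc base bdevs with 512 B blocks (65536 blocks each,
# matching data_size in the JSON dumps).
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# raid5f, 64 KiB strip: with 4 drives one stripe holds 3 data chunks plus
# parity, so a full stripe is 3 * 128 blocks = 384 blocks = 196608 bytes --
# the write_unit_size and dd block size seen above.
$rpc bdev_raid_create -z 64 -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

# Export the array over NBD and write 512 full stripes (100663296 bytes,
# as reported by dd in the trace).
$rpc nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
$rpc nbd_stop_disk /dev/nbd0

Writing in whole write units keeps every dd request stripe-aligned, which is why the test computes 384 blocks before touching the device instead of picking an arbitrary block size.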
00:26:07.917 22:34:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:08.176 [2024-07-14 22:34:07.201547] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.176 22:34:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.743 22:34:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.743 "name": "raid_bdev1", 00:26:08.743 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:08.743 "strip_size_kb": 64, 00:26:08.743 "state": "online", 00:26:08.743 "raid_level": "raid5f", 00:26:08.743 "superblock": false, 00:26:08.743 "num_base_bdevs": 4, 00:26:08.743 "num_base_bdevs_discovered": 3, 00:26:08.743 "num_base_bdevs_operational": 3, 00:26:08.743 "base_bdevs_list": [ 00:26:08.743 { 00:26:08.743 "name": null, 00:26:08.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.743 "is_configured": false, 00:26:08.743 "data_offset": 0, 00:26:08.743 "data_size": 65536 00:26:08.743 }, 00:26:08.743 { 00:26:08.743 "name": "BaseBdev2", 00:26:08.743 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:08.743 "is_configured": true, 00:26:08.743 "data_offset": 0, 00:26:08.743 "data_size": 65536 00:26:08.743 }, 00:26:08.743 { 00:26:08.743 "name": "BaseBdev3", 00:26:08.743 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:08.743 "is_configured": true, 00:26:08.743 "data_offset": 0, 00:26:08.743 "data_size": 65536 00:26:08.743 }, 00:26:08.743 { 00:26:08.743 "name": "BaseBdev4", 00:26:08.743 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:08.743 "is_configured": true, 00:26:08.743 "data_offset": 0, 00:26:08.743 "data_size": 65536 00:26:08.743 } 00:26:08.743 ] 00:26:08.743 }' 00:26:08.743 22:34:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.743 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.309 22:34:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:09.567 [2024-07-14 22:34:08.498023] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:09.567 [2024-07-14 22:34:08.498089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:09.567 [2024-07-14 22:34:08.508604] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:26:09.567 [2024-07-14 22:34:08.515407] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:09.567 22:34:08 -- bdev/bdev_raid.sh@598 -- # sleep 
1 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.504 22:34:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.763 22:34:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:10.763 "name": "raid_bdev1", 00:26:10.763 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:10.763 "strip_size_kb": 64, 00:26:10.763 "state": "online", 00:26:10.763 "raid_level": "raid5f", 00:26:10.763 "superblock": false, 00:26:10.763 "num_base_bdevs": 4, 00:26:10.763 "num_base_bdevs_discovered": 4, 00:26:10.763 "num_base_bdevs_operational": 4, 00:26:10.763 "process": { 00:26:10.763 "type": "rebuild", 00:26:10.763 "target": "spare", 00:26:10.763 "progress": { 00:26:10.763 "blocks": 23040, 00:26:10.763 "percent": 11 00:26:10.763 } 00:26:10.763 }, 00:26:10.763 "base_bdevs_list": [ 00:26:10.763 { 00:26:10.763 "name": "spare", 00:26:10.763 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:10.763 "is_configured": true, 00:26:10.763 "data_offset": 0, 00:26:10.763 "data_size": 65536 00:26:10.763 }, 00:26:10.763 { 00:26:10.763 "name": "BaseBdev2", 00:26:10.763 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:10.763 "is_configured": true, 00:26:10.763 "data_offset": 0, 00:26:10.763 "data_size": 65536 00:26:10.763 }, 00:26:10.763 { 00:26:10.763 "name": "BaseBdev3", 00:26:10.763 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:10.763 "is_configured": true, 00:26:10.763 "data_offset": 0, 00:26:10.763 "data_size": 65536 00:26:10.763 }, 00:26:10.763 { 00:26:10.763 "name": "BaseBdev4", 00:26:10.763 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:10.763 "is_configured": true, 00:26:10.763 "data_offset": 0, 00:26:10.763 "data_size": 65536 00:26:10.763 } 00:26:10.763 ] 00:26:10.763 }' 00:26:10.763 22:34:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:10.763 22:34:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:10.763 22:34:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.021 22:34:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.022 22:34:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:11.280 [2024-07-14 22:34:10.104547] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:11.280 [2024-07-14 22:34:10.125779] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:11.280 [2024-07-14 22:34:10.125916] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:11.280 22:34:10 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.280 22:34:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.539 22:34:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.539 "name": "raid_bdev1", 00:26:11.539 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:11.539 "strip_size_kb": 64, 00:26:11.539 "state": "online", 00:26:11.539 "raid_level": "raid5f", 00:26:11.539 "superblock": false, 00:26:11.539 "num_base_bdevs": 4, 00:26:11.539 "num_base_bdevs_discovered": 3, 00:26:11.539 "num_base_bdevs_operational": 3, 00:26:11.539 "base_bdevs_list": [ 00:26:11.539 { 00:26:11.539 "name": null, 00:26:11.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.539 "is_configured": false, 00:26:11.539 "data_offset": 0, 00:26:11.539 "data_size": 65536 00:26:11.539 }, 00:26:11.539 { 00:26:11.539 "name": "BaseBdev2", 00:26:11.539 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:11.539 "is_configured": true, 00:26:11.539 "data_offset": 0, 00:26:11.539 "data_size": 65536 00:26:11.539 }, 00:26:11.539 { 00:26:11.539 "name": "BaseBdev3", 00:26:11.539 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:11.539 "is_configured": true, 00:26:11.539 "data_offset": 0, 00:26:11.539 "data_size": 65536 00:26:11.539 }, 00:26:11.539 { 00:26:11.539 "name": "BaseBdev4", 00:26:11.539 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:11.539 "is_configured": true, 00:26:11.539 "data_offset": 0, 00:26:11.539 "data_size": 65536 00:26:11.539 } 00:26:11.539 ] 00:26:11.539 }' 00:26:11.539 22:34:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.539 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.106 22:34:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:12.365 "name": "raid_bdev1", 00:26:12.365 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:12.365 "strip_size_kb": 64, 00:26:12.365 "state": "online", 00:26:12.365 "raid_level": "raid5f", 00:26:12.365 "superblock": false, 00:26:12.365 "num_base_bdevs": 4, 00:26:12.365 "num_base_bdevs_discovered": 3, 00:26:12.365 "num_base_bdevs_operational": 3, 00:26:12.365 "base_bdevs_list": [ 00:26:12.365 { 00:26:12.365 "name": null, 00:26:12.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.365 "is_configured": false, 00:26:12.365 "data_offset": 0, 00:26:12.365 "data_size": 65536 00:26:12.365 }, 00:26:12.365 { 00:26:12.365 "name": "BaseBdev2", 00:26:12.365 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 
00:26:12.365 "is_configured": true, 00:26:12.365 "data_offset": 0, 00:26:12.365 "data_size": 65536 00:26:12.365 }, 00:26:12.365 { 00:26:12.365 "name": "BaseBdev3", 00:26:12.365 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:12.365 "is_configured": true, 00:26:12.365 "data_offset": 0, 00:26:12.365 "data_size": 65536 00:26:12.365 }, 00:26:12.365 { 00:26:12.365 "name": "BaseBdev4", 00:26:12.365 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:12.365 "is_configured": true, 00:26:12.365 "data_offset": 0, 00:26:12.365 "data_size": 65536 00:26:12.365 } 00:26:12.365 ] 00:26:12.365 }' 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:12.365 22:34:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:12.623 [2024-07-14 22:34:11.588853] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:12.623 [2024-07-14 22:34:11.588893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:12.623 [2024-07-14 22:34:11.599568] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:26:12.623 [2024-07-14 22:34:11.606975] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:12.623 22:34:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.558 22:34:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.817 22:34:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:13.817 "name": "raid_bdev1", 00:26:13.817 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:13.817 "strip_size_kb": 64, 00:26:13.817 "state": "online", 00:26:13.817 "raid_level": "raid5f", 00:26:13.817 "superblock": false, 00:26:13.817 "num_base_bdevs": 4, 00:26:13.817 "num_base_bdevs_discovered": 4, 00:26:13.817 "num_base_bdevs_operational": 4, 00:26:13.817 "process": { 00:26:13.817 "type": "rebuild", 00:26:13.817 "target": "spare", 00:26:13.817 "progress": { 00:26:13.817 "blocks": 23040, 00:26:13.817 "percent": 11 00:26:13.817 } 00:26:13.817 }, 00:26:13.817 "base_bdevs_list": [ 00:26:13.817 { 00:26:13.817 "name": "spare", 00:26:13.817 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:13.817 "is_configured": true, 00:26:13.817 "data_offset": 0, 00:26:13.817 "data_size": 65536 00:26:13.817 }, 00:26:13.817 { 00:26:13.817 "name": "BaseBdev2", 00:26:13.817 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:13.817 "is_configured": true, 00:26:13.817 "data_offset": 0, 00:26:13.817 "data_size": 65536 00:26:13.817 }, 00:26:13.817 { 00:26:13.817 "name": "BaseBdev3", 00:26:13.817 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:13.817 "is_configured": 
true, 00:26:13.817 "data_offset": 0, 00:26:13.817 "data_size": 65536 00:26:13.817 }, 00:26:13.817 { 00:26:13.817 "name": "BaseBdev4", 00:26:13.817 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:13.817 "is_configured": true, 00:26:13.817 "data_offset": 0, 00:26:13.817 "data_size": 65536 00:26:13.817 } 00:26:13.817 ] 00:26:13.817 }' 00:26:13.817 22:34:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@657 -- # local timeout=709 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.075 22:34:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.346 22:34:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:14.346 "name": "raid_bdev1", 00:26:14.346 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:14.346 "strip_size_kb": 64, 00:26:14.346 "state": "online", 00:26:14.346 "raid_level": "raid5f", 00:26:14.346 "superblock": false, 00:26:14.346 "num_base_bdevs": 4, 00:26:14.346 "num_base_bdevs_discovered": 4, 00:26:14.346 "num_base_bdevs_operational": 4, 00:26:14.346 "process": { 00:26:14.346 "type": "rebuild", 00:26:14.346 "target": "spare", 00:26:14.346 "progress": { 00:26:14.346 "blocks": 28800, 00:26:14.346 "percent": 14 00:26:14.346 } 00:26:14.346 }, 00:26:14.346 "base_bdevs_list": [ 00:26:14.346 { 00:26:14.346 "name": "spare", 00:26:14.347 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:14.347 "is_configured": true, 00:26:14.347 "data_offset": 0, 00:26:14.347 "data_size": 65536 00:26:14.347 }, 00:26:14.347 { 00:26:14.347 "name": "BaseBdev2", 00:26:14.347 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:14.347 "is_configured": true, 00:26:14.347 "data_offset": 0, 00:26:14.347 "data_size": 65536 00:26:14.347 }, 00:26:14.347 { 00:26:14.347 "name": "BaseBdev3", 00:26:14.347 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:14.347 "is_configured": true, 00:26:14.347 "data_offset": 0, 00:26:14.347 "data_size": 65536 00:26:14.347 }, 00:26:14.347 { 00:26:14.347 "name": "BaseBdev4", 00:26:14.347 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:14.347 "is_configured": true, 00:26:14.347 "data_offset": 0, 00:26:14.347 "data_size": 65536 00:26:14.347 } 00:26:14.347 ] 00:26:14.347 }' 00:26:14.347 22:34:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:14.347 22:34:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.347 22:34:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
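The repeating dumps on either side of this point come from a one-second polling loop. A minimal sketch of that loop, assuming the same rpc.py path and socket; the JSON field names and the 709-second budget are taken from this trace, while the echo formatting is illustrative only:

#!/usr/bin/env bash
# Poll raid_bdev1 once a second and report rebuild progress until the
# process entry disappears from bdev_raid_get_bdevs output or time runs out.
set -euo pipefail
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

timeout=709
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Once the rebuild finishes, .process is gone and the type falls back to "none".
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    echo "rebuilt $(jq -r '.process.progress.blocks' <<< "$info") blocks" \
         "($(jq -r '.process.progress.percent' <<< "$info")%)"
    sleep 1
done

This is the same shape as the test's own verify_raid_bdev_process helper: fetch the bdev list, filter by name with jq, then branch on .process.type and .process.target.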
00:26:14.347 22:34:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.347 22:34:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.311 22:34:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.570 22:34:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.570 "name": "raid_bdev1", 00:26:15.570 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:15.570 "strip_size_kb": 64, 00:26:15.570 "state": "online", 00:26:15.570 "raid_level": "raid5f", 00:26:15.570 "superblock": false, 00:26:15.570 "num_base_bdevs": 4, 00:26:15.570 "num_base_bdevs_discovered": 4, 00:26:15.570 "num_base_bdevs_operational": 4, 00:26:15.570 "process": { 00:26:15.570 "type": "rebuild", 00:26:15.570 "target": "spare", 00:26:15.570 "progress": { 00:26:15.570 "blocks": 55680, 00:26:15.570 "percent": 28 00:26:15.570 } 00:26:15.570 }, 00:26:15.570 "base_bdevs_list": [ 00:26:15.570 { 00:26:15.570 "name": "spare", 00:26:15.570 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:15.570 "is_configured": true, 00:26:15.570 "data_offset": 0, 00:26:15.570 "data_size": 65536 00:26:15.570 }, 00:26:15.570 { 00:26:15.570 "name": "BaseBdev2", 00:26:15.570 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:15.570 "is_configured": true, 00:26:15.570 "data_offset": 0, 00:26:15.570 "data_size": 65536 00:26:15.570 }, 00:26:15.570 { 00:26:15.570 "name": "BaseBdev3", 00:26:15.570 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:15.570 "is_configured": true, 00:26:15.570 "data_offset": 0, 00:26:15.570 "data_size": 65536 00:26:15.570 }, 00:26:15.570 { 00:26:15.570 "name": "BaseBdev4", 00:26:15.570 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:15.570 "is_configured": true, 00:26:15.570 "data_offset": 0, 00:26:15.570 "data_size": 65536 00:26:15.570 } 00:26:15.570 ] 00:26:15.570 }' 00:26:15.570 22:34:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.570 22:34:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.570 22:34:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.828 22:34:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.828 22:34:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.761 22:34:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:26:17.020 22:34:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:17.020 "name": "raid_bdev1", 00:26:17.020 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:17.020 "strip_size_kb": 64, 00:26:17.020 "state": "online", 00:26:17.020 "raid_level": "raid5f", 00:26:17.020 "superblock": false, 00:26:17.020 "num_base_bdevs": 4, 00:26:17.020 "num_base_bdevs_discovered": 4, 00:26:17.020 "num_base_bdevs_operational": 4, 00:26:17.020 "process": { 00:26:17.020 "type": "rebuild", 00:26:17.020 "target": "spare", 00:26:17.020 "progress": { 00:26:17.020 "blocks": 80640, 00:26:17.020 "percent": 41 00:26:17.020 } 00:26:17.020 }, 00:26:17.020 "base_bdevs_list": [ 00:26:17.020 { 00:26:17.020 "name": "spare", 00:26:17.020 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:17.020 "is_configured": true, 00:26:17.020 "data_offset": 0, 00:26:17.020 "data_size": 65536 00:26:17.020 }, 00:26:17.020 { 00:26:17.020 "name": "BaseBdev2", 00:26:17.020 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:17.020 "is_configured": true, 00:26:17.020 "data_offset": 0, 00:26:17.020 "data_size": 65536 00:26:17.020 }, 00:26:17.020 { 00:26:17.020 "name": "BaseBdev3", 00:26:17.020 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:17.020 "is_configured": true, 00:26:17.020 "data_offset": 0, 00:26:17.020 "data_size": 65536 00:26:17.020 }, 00:26:17.020 { 00:26:17.020 "name": "BaseBdev4", 00:26:17.020 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:17.020 "is_configured": true, 00:26:17.020 "data_offset": 0, 00:26:17.020 "data_size": 65536 00:26:17.020 } 00:26:17.020 ] 00:26:17.020 }' 00:26:17.020 22:34:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:17.020 22:34:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:17.020 22:34:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:17.020 22:34:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:17.020 22:34:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.956 22:34:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.215 22:34:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.215 "name": "raid_bdev1", 00:26:18.215 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:18.215 "strip_size_kb": 64, 00:26:18.215 "state": "online", 00:26:18.215 "raid_level": "raid5f", 00:26:18.215 "superblock": false, 00:26:18.215 "num_base_bdevs": 4, 00:26:18.215 "num_base_bdevs_discovered": 4, 00:26:18.215 "num_base_bdevs_operational": 4, 00:26:18.215 "process": { 00:26:18.215 "type": "rebuild", 00:26:18.215 "target": "spare", 00:26:18.215 "progress": { 00:26:18.215 "blocks": 107520, 00:26:18.215 "percent": 54 00:26:18.215 } 00:26:18.215 }, 00:26:18.215 "base_bdevs_list": [ 00:26:18.215 { 00:26:18.215 "name": "spare", 00:26:18.215 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:18.215 "is_configured": true, 
00:26:18.215 "data_offset": 0, 00:26:18.215 "data_size": 65536 00:26:18.215 }, 00:26:18.215 { 00:26:18.215 "name": "BaseBdev2", 00:26:18.215 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:18.215 "is_configured": true, 00:26:18.215 "data_offset": 0, 00:26:18.215 "data_size": 65536 00:26:18.215 }, 00:26:18.215 { 00:26:18.215 "name": "BaseBdev3", 00:26:18.215 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:18.215 "is_configured": true, 00:26:18.215 "data_offset": 0, 00:26:18.215 "data_size": 65536 00:26:18.215 }, 00:26:18.215 { 00:26:18.215 "name": "BaseBdev4", 00:26:18.215 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:18.215 "is_configured": true, 00:26:18.215 "data_offset": 0, 00:26:18.215 "data_size": 65536 00:26:18.215 } 00:26:18.215 ] 00:26:18.215 }' 00:26:18.215 22:34:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.474 22:34:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.474 22:34:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.474 22:34:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.474 22:34:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.409 22:34:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.668 22:34:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:19.668 "name": "raid_bdev1", 00:26:19.668 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:19.668 "strip_size_kb": 64, 00:26:19.668 "state": "online", 00:26:19.668 "raid_level": "raid5f", 00:26:19.668 "superblock": false, 00:26:19.668 "num_base_bdevs": 4, 00:26:19.668 "num_base_bdevs_discovered": 4, 00:26:19.668 "num_base_bdevs_operational": 4, 00:26:19.668 "process": { 00:26:19.668 "type": "rebuild", 00:26:19.668 "target": "spare", 00:26:19.668 "progress": { 00:26:19.668 "blocks": 132480, 00:26:19.668 "percent": 67 00:26:19.668 } 00:26:19.668 }, 00:26:19.668 "base_bdevs_list": [ 00:26:19.668 { 00:26:19.668 "name": "spare", 00:26:19.668 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:19.668 "is_configured": true, 00:26:19.668 "data_offset": 0, 00:26:19.668 "data_size": 65536 00:26:19.668 }, 00:26:19.668 { 00:26:19.668 "name": "BaseBdev2", 00:26:19.668 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:19.668 "is_configured": true, 00:26:19.668 "data_offset": 0, 00:26:19.668 "data_size": 65536 00:26:19.668 }, 00:26:19.668 { 00:26:19.668 "name": "BaseBdev3", 00:26:19.668 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:19.668 "is_configured": true, 00:26:19.668 "data_offset": 0, 00:26:19.668 "data_size": 65536 00:26:19.668 }, 00:26:19.668 { 00:26:19.668 "name": "BaseBdev4", 00:26:19.668 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:19.668 "is_configured": true, 00:26:19.668 "data_offset": 0, 00:26:19.668 "data_size": 65536 00:26:19.668 } 00:26:19.668 ] 00:26:19.668 }' 00:26:19.668 22:34:18 -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:26:19.668 22:34:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.668 22:34:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:19.926 22:34:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.926 22:34:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:20.861 22:34:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:20.861 22:34:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.861 22:34:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:20.861 22:34:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:20.862 22:34:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:20.862 22:34:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:20.862 22:34:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.862 22:34:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.119 "name": "raid_bdev1", 00:26:21.119 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:21.119 "strip_size_kb": 64, 00:26:21.119 "state": "online", 00:26:21.119 "raid_level": "raid5f", 00:26:21.119 "superblock": false, 00:26:21.119 "num_base_bdevs": 4, 00:26:21.119 "num_base_bdevs_discovered": 4, 00:26:21.119 "num_base_bdevs_operational": 4, 00:26:21.119 "process": { 00:26:21.119 "type": "rebuild", 00:26:21.119 "target": "spare", 00:26:21.119 "progress": { 00:26:21.119 "blocks": 159360, 00:26:21.119 "percent": 81 00:26:21.119 } 00:26:21.119 }, 00:26:21.119 "base_bdevs_list": [ 00:26:21.119 { 00:26:21.119 "name": "spare", 00:26:21.119 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:21.119 "is_configured": true, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 65536 00:26:21.119 }, 00:26:21.119 { 00:26:21.119 "name": "BaseBdev2", 00:26:21.119 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:21.119 "is_configured": true, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 65536 00:26:21.119 }, 00:26:21.119 { 00:26:21.119 "name": "BaseBdev3", 00:26:21.119 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:21.119 "is_configured": true, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 65536 00:26:21.119 }, 00:26:21.119 { 00:26:21.119 "name": "BaseBdev4", 00:26:21.119 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:21.119 "is_configured": true, 00:26:21.119 "data_offset": 0, 00:26:21.119 "data_size": 65536 00:26:21.119 } 00:26:21.119 ] 00:26:21.119 }' 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:21.119 22:34:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:22.518 
22:34:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:22.518 "name": "raid_bdev1", 00:26:22.518 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:22.518 "strip_size_kb": 64, 00:26:22.518 "state": "online", 00:26:22.518 "raid_level": "raid5f", 00:26:22.518 "superblock": false, 00:26:22.518 "num_base_bdevs": 4, 00:26:22.518 "num_base_bdevs_discovered": 4, 00:26:22.518 "num_base_bdevs_operational": 4, 00:26:22.518 "process": { 00:26:22.518 "type": "rebuild", 00:26:22.518 "target": "spare", 00:26:22.518 "progress": { 00:26:22.518 "blocks": 186240, 00:26:22.518 "percent": 94 00:26:22.518 } 00:26:22.518 }, 00:26:22.518 "base_bdevs_list": [ 00:26:22.518 { 00:26:22.518 "name": "spare", 00:26:22.518 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:22.518 "is_configured": true, 00:26:22.518 "data_offset": 0, 00:26:22.518 "data_size": 65536 00:26:22.518 }, 00:26:22.518 { 00:26:22.518 "name": "BaseBdev2", 00:26:22.518 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:22.518 "is_configured": true, 00:26:22.518 "data_offset": 0, 00:26:22.518 "data_size": 65536 00:26:22.518 }, 00:26:22.518 { 00:26:22.518 "name": "BaseBdev3", 00:26:22.518 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:22.518 "is_configured": true, 00:26:22.518 "data_offset": 0, 00:26:22.518 "data_size": 65536 00:26:22.518 }, 00:26:22.518 { 00:26:22.518 "name": "BaseBdev4", 00:26:22.518 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:22.518 "is_configured": true, 00:26:22.518 "data_offset": 0, 00:26:22.518 "data_size": 65536 00:26:22.518 } 00:26:22.518 ] 00:26:22.518 }' 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:22.518 22:34:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:23.083 [2024-07-14 22:34:21.971594] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:23.083 [2024-07-14 22:34:21.971690] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:23.083 [2024-07-14 22:34:21.971778] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.650 22:34:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.909 22:34:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:23.909 "name": "raid_bdev1", 00:26:23.909 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:23.909 "strip_size_kb": 64, 00:26:23.909 "state": 
"online", 00:26:23.909 "raid_level": "raid5f", 00:26:23.909 "superblock": false, 00:26:23.909 "num_base_bdevs": 4, 00:26:23.909 "num_base_bdevs_discovered": 4, 00:26:23.909 "num_base_bdevs_operational": 4, 00:26:23.909 "base_bdevs_list": [ 00:26:23.909 { 00:26:23.909 "name": "spare", 00:26:23.909 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:23.909 "is_configured": true, 00:26:23.909 "data_offset": 0, 00:26:23.909 "data_size": 65536 00:26:23.909 }, 00:26:23.909 { 00:26:23.909 "name": "BaseBdev2", 00:26:23.909 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:23.909 "is_configured": true, 00:26:23.909 "data_offset": 0, 00:26:23.909 "data_size": 65536 00:26:23.910 }, 00:26:23.910 { 00:26:23.910 "name": "BaseBdev3", 00:26:23.910 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 }, 00:26:23.910 { 00:26:23.910 "name": "BaseBdev4", 00:26:23.910 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 } 00:26:23.910 ] 00:26:23.910 }' 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@660 -- # break 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.910 22:34:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.169 22:34:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:24.169 "name": "raid_bdev1", 00:26:24.169 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:24.169 "strip_size_kb": 64, 00:26:24.169 "state": "online", 00:26:24.169 "raid_level": "raid5f", 00:26:24.169 "superblock": false, 00:26:24.169 "num_base_bdevs": 4, 00:26:24.169 "num_base_bdevs_discovered": 4, 00:26:24.169 "num_base_bdevs_operational": 4, 00:26:24.169 "base_bdevs_list": [ 00:26:24.169 { 00:26:24.169 "name": "spare", 00:26:24.169 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev2", 00:26:24.169 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev3", 00:26:24.169 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev4", 00:26:24.169 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 } 
00:26:24.169 ] 00:26:24.169 }' 00:26:24.169 22:34:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:24.169 22:34:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:24.169 22:34:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:24.426 22:34:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.427 22:34:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.685 22:34:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:24.685 "name": "raid_bdev1", 00:26:24.685 "uuid": "41b09c1a-6542-42bc-b14a-99486f44714f", 00:26:24.685 "strip_size_kb": 64, 00:26:24.685 "state": "online", 00:26:24.685 "raid_level": "raid5f", 00:26:24.685 "superblock": false, 00:26:24.685 "num_base_bdevs": 4, 00:26:24.685 "num_base_bdevs_discovered": 4, 00:26:24.685 "num_base_bdevs_operational": 4, 00:26:24.685 "base_bdevs_list": [ 00:26:24.685 { 00:26:24.685 "name": "spare", 00:26:24.685 "uuid": "8e67b9a3-7e46-54e2-8e12-a3f7b9fc0836", 00:26:24.685 "is_configured": true, 00:26:24.685 "data_offset": 0, 00:26:24.685 "data_size": 65536 00:26:24.685 }, 00:26:24.685 { 00:26:24.685 "name": "BaseBdev2", 00:26:24.685 "uuid": "97191178-f78d-4764-946c-eafff7fbf6a2", 00:26:24.685 "is_configured": true, 00:26:24.685 "data_offset": 0, 00:26:24.685 "data_size": 65536 00:26:24.685 }, 00:26:24.685 { 00:26:24.685 "name": "BaseBdev3", 00:26:24.685 "uuid": "50f6e9ad-ae40-445c-a2ba-87803a189fef", 00:26:24.685 "is_configured": true, 00:26:24.685 "data_offset": 0, 00:26:24.685 "data_size": 65536 00:26:24.685 }, 00:26:24.685 { 00:26:24.685 "name": "BaseBdev4", 00:26:24.685 "uuid": "09b180a8-1e2e-4abf-9f4c-7f216c2e04aa", 00:26:24.685 "is_configured": true, 00:26:24.685 "data_offset": 0, 00:26:24.685 "data_size": 65536 00:26:24.685 } 00:26:24.685 ] 00:26:24.685 }' 00:26:24.685 22:34:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:24.685 22:34:23 -- common/autotest_common.sh@10 -- # set +x 00:26:25.251 22:34:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:25.510 [2024-07-14 22:34:24.350634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:25.510 [2024-07-14 22:34:24.350664] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:25.510 [2024-07-14 22:34:24.350743] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:25.510 [2024-07-14 22:34:24.350825] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:26:25.510 [2024-07-14 22:34:24.350838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:25.510 22:34:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.510 22:34:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:25.768 22:34:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:25.768 22:34:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:25.768 22:34:24 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@12 -- # local i 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:25.768 22:34:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:26.024 /dev/nbd0 00:26:26.025 22:34:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:26.025 22:34:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:26.025 22:34:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:26.025 22:34:24 -- common/autotest_common.sh@857 -- # local i 00:26:26.025 22:34:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:26.025 22:34:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:26.025 22:34:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:26.025 22:34:24 -- common/autotest_common.sh@861 -- # break 00:26:26.025 22:34:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:26.025 22:34:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:26.025 22:34:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:26.025 1+0 records in 00:26:26.025 1+0 records out 00:26:26.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552669 s, 7.4 MB/s 00:26:26.025 22:34:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.025 22:34:24 -- common/autotest_common.sh@874 -- # size=4096 00:26:26.025 22:34:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.025 22:34:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:26.025 22:34:24 -- common/autotest_common.sh@877 -- # return 0 00:26:26.025 22:34:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:26.025 22:34:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:26.025 22:34:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:26.281 /dev/nbd1 00:26:26.281 22:34:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:26.281 22:34:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:26.281 22:34:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:26.281 22:34:25 -- common/autotest_common.sh@857 -- # local i 00:26:26.281 22:34:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:26.281 22:34:25 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:26.281 22:34:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:26.281 22:34:25 -- common/autotest_common.sh@861 -- # break 00:26:26.281 22:34:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:26.281 22:34:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:26.281 22:34:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:26.281 1+0 records in 00:26:26.281 1+0 records out 00:26:26.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452914 s, 9.0 MB/s 00:26:26.281 22:34:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.281 22:34:25 -- common/autotest_common.sh@874 -- # size=4096 00:26:26.281 22:34:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.281 22:34:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:26.281 22:34:25 -- common/autotest_common.sh@877 -- # return 0 00:26:26.281 22:34:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:26.281 22:34:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:26.281 22:34:25 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:26.537 22:34:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@51 -- # local i 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:26.537 22:34:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:26.794 22:34:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:26.794 22:34:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@41 -- # break 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@45 -- # return 0 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:26.795 22:34:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:27.052 22:34:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:27.052 22:34:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:27.052 22:34:26 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:27.052 22:34:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:27.052 22:34:26 -- bdev/nbd_common.sh@41 -- # break 00:26:27.052 22:34:26 -- bdev/nbd_common.sh@45 -- # return 0 00:26:27.052 22:34:26 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:26:27.053 22:34:26 -- bdev/bdev_raid.sh@709 -- # killprocess 134803 00:26:27.053 22:34:26 -- common/autotest_common.sh@926 -- # '[' -z 134803 ']' 00:26:27.053 22:34:26 -- common/autotest_common.sh@930 -- # kill -0 134803 00:26:27.053 22:34:26 -- common/autotest_common.sh@931 -- # uname 00:26:27.053 22:34:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.053 22:34:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134803 00:26:27.053 killing process with pid 134803 00:26:27.053 Received shutdown signal, test time was about 60.000000 seconds 00:26:27.053 00:26:27.053 Latency(us) 00:26:27.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.053 =================================================================================================================== 00:26:27.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:27.053 22:34:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:27.053 22:34:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:27.053 22:34:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134803' 00:26:27.053 22:34:26 -- common/autotest_common.sh@945 -- # kill 134803 00:26:27.053 22:34:26 -- common/autotest_common.sh@950 -- # wait 134803 00:26:27.053 [2024-07-14 22:34:26.109708] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:27.619 [2024-07-14 22:34:26.424928] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:28.554 ************************************ 00:26:28.554 END TEST raid5f_rebuild_test 00:26:28.554 ************************************ 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:28.554 00:26:28.554 real 0m26.336s 00:26:28.554 user 0m38.775s 00:26:28.554 sys 0m3.021s 00:26:28.554 22:34:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.554 22:34:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:26:28.554 22:34:27 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:28.554 22:34:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.554 22:34:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.554 ************************************ 00:26:28.554 START TEST raid5f_rebuild_test_sb 00:26:28.554 ************************************ 00:26:28.554 22:34:27 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:28.554 22:34:27 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@544 -- # raid_pid=135458 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135458 /var/tmp/spdk-raid.sock 00:26:28.554 22:34:27 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:28.554 22:34:27 -- common/autotest_common.sh@819 -- # '[' -z 135458 ']' 00:26:28.554 22:34:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:28.554 22:34:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:28.554 22:34:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:28.554 22:34:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.554 22:34:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.554 [2024-07-14 22:34:27.467292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:28.554 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:28.554 Zero copy mechanism will not be used. 
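The run above launches bdevperf with a private RPC socket (-r /var/tmp/spdk-raid.sock) and then parks in waitforlisten until the new process actually serves RPCs on it. A minimal sketch of that idiom, assuming rpc.py from the SPDK tree and rpc_get_methods as the probe call (the real helper in autotest_common.sh carries more bookkeeping than shown here):

# Poll the freshly spawned target's RPC socket until it answers,
# bailing out early if the process has already died.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk-raid.sock}   # socket passed via -r above
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target exited before listening
        # -t 1 bounds each probe at one second; any successful RPC means the
        # server is up, and rpc_get_methods is cheap and always registered.
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1   # never came up within max_retries probes
}

Called as waitforlisten_sketch "$raid_pid" /var/tmp/spdk-raid.sock right after the bdevperf launch, this is what turns the asynchronous process start into the synchronous "Waiting for process to start up..." line recorded in the log.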
00:26:28.554 [2024-07-14 22:34:27.467462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135458 ] 00:26:28.554 [2024-07-14 22:34:27.621871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.812 [2024-07-14 22:34:27.781581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.071 [2024-07-14 22:34:27.957395] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:29.329 22:34:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.329 22:34:28 -- common/autotest_common.sh@852 -- # return 0 00:26:29.329 22:34:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:29.329 22:34:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:29.329 22:34:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:29.587 BaseBdev1_malloc 00:26:29.587 22:34:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:29.846 [2024-07-14 22:34:28.860841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:29.846 [2024-07-14 22:34:28.861004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.846 [2024-07-14 22:34:28.861041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:29.846 [2024-07-14 22:34:28.861086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.846 [2024-07-14 22:34:28.863448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.846 [2024-07-14 22:34:28.863501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:29.846 BaseBdev1 00:26:29.846 22:34:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:29.846 22:34:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:29.846 22:34:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:30.104 BaseBdev2_malloc 00:26:30.104 22:34:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:30.363 [2024-07-14 22:34:29.293992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:30.363 [2024-07-14 22:34:29.294081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.363 [2024-07-14 22:34:29.294122] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:30.363 [2024-07-14 22:34:29.294196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.363 [2024-07-14 22:34:29.296188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.363 [2024-07-14 22:34:29.296233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:30.363 BaseBdev2 00:26:30.363 22:34:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:30.363 22:34:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:30.363 22:34:29 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:30.622 BaseBdev3_malloc 00:26:30.622 22:34:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:30.880 [2024-07-14 22:34:29.756868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:30.880 [2024-07-14 22:34:29.756955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.880 [2024-07-14 22:34:29.756994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:30.880 [2024-07-14 22:34:29.757034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.880 [2024-07-14 22:34:29.758929] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.880 [2024-07-14 22:34:29.758978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:30.880 BaseBdev3 00:26:30.880 22:34:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:30.880 22:34:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:30.880 22:34:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:31.139 BaseBdev4_malloc 00:26:31.139 22:34:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:31.139 [2024-07-14 22:34:30.149487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:31.139 [2024-07-14 22:34:30.149580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.139 [2024-07-14 22:34:30.149614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:31.139 [2024-07-14 22:34:30.149654] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.139 [2024-07-14 22:34:30.151527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.139 [2024-07-14 22:34:30.151574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:31.139 BaseBdev4 00:26:31.139 22:34:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:31.397 spare_malloc 00:26:31.397 22:34:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:31.657 spare_delay 00:26:31.657 22:34:30 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:31.936 [2024-07-14 22:34:30.780161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:31.936 [2024-07-14 22:34:30.780285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.936 [2024-07-14 22:34:30.780318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:31.936 [2024-07-14 22:34:30.780360] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.936 [2024-07-14 22:34:30.782567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:31.936 [2024-07-14 22:34:30.782642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:31.936 spare 00:26:31.936 22:34:30 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:32.202 [2024-07-14 22:34:31.032348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:32.202 [2024-07-14 22:34:31.034131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:32.202 [2024-07-14 22:34:31.034257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:32.202 [2024-07-14 22:34:31.034313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:32.202 [2024-07-14 22:34:31.034605] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:26:32.202 [2024-07-14 22:34:31.034645] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:32.202 [2024-07-14 22:34:31.034784] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:32.202 [2024-07-14 22:34:31.040895] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:26:32.202 [2024-07-14 22:34:31.040921] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:26:32.202 [2024-07-14 22:34:31.041115] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.202 22:34:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:32.202 "name": "raid_bdev1", 00:26:32.202 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:32.202 "strip_size_kb": 64, 00:26:32.202 "state": "online", 00:26:32.202 "raid_level": "raid5f", 00:26:32.202 "superblock": true, 00:26:32.202 "num_base_bdevs": 4, 00:26:32.202 "num_base_bdevs_discovered": 4, 00:26:32.202 "num_base_bdevs_operational": 4, 00:26:32.202 "base_bdevs_list": [ 00:26:32.202 { 00:26:32.202 "name": "BaseBdev1", 00:26:32.202 "uuid": "5676f2d3-cda6-52e7-8bc1-0c8de811e614", 00:26:32.202 "is_configured": true, 00:26:32.202 "data_offset": 2048, 00:26:32.202 "data_size": 63488 00:26:32.202 }, 00:26:32.202 { 00:26:32.202 "name": "BaseBdev2", 00:26:32.202 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:32.202 "is_configured": true, 00:26:32.202 
"data_offset": 2048, 00:26:32.202 "data_size": 63488 00:26:32.202 }, 00:26:32.202 { 00:26:32.202 "name": "BaseBdev3", 00:26:32.202 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:32.202 "is_configured": true, 00:26:32.203 "data_offset": 2048, 00:26:32.203 "data_size": 63488 00:26:32.203 }, 00:26:32.203 { 00:26:32.203 "name": "BaseBdev4", 00:26:32.203 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:32.203 "is_configured": true, 00:26:32.203 "data_offset": 2048, 00:26:32.203 "data_size": 63488 00:26:32.203 } 00:26:32.203 ] 00:26:32.203 }' 00:26:32.203 22:34:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:32.203 22:34:31 -- common/autotest_common.sh@10 -- # set +x 00:26:33.136 22:34:31 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:33.136 22:34:31 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:33.136 [2024-07-14 22:34:32.188375] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:33.136 22:34:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:26:33.136 22:34:32 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.136 22:34:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:33.394 22:34:32 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:26:33.394 22:34:32 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:33.394 22:34:32 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:33.394 22:34:32 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@12 -- # local i 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:33.394 22:34:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:33.661 [2024-07-14 22:34:32.664431] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:33.661 /dev/nbd0 00:26:33.661 22:34:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:33.661 22:34:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:33.661 22:34:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:33.661 22:34:32 -- common/autotest_common.sh@857 -- # local i 00:26:33.661 22:34:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:33.661 22:34:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:33.661 22:34:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:33.661 22:34:32 -- common/autotest_common.sh@861 -- # break 00:26:33.661 22:34:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:33.661 22:34:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:33.661 22:34:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:33.661 1+0 records in 00:26:33.661 1+0 records out 00:26:33.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333711 s, 
12.3 MB/s 00:26:33.661 22:34:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.661 22:34:32 -- common/autotest_common.sh@874 -- # size=4096 00:26:33.661 22:34:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.661 22:34:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:33.661 22:34:32 -- common/autotest_common.sh@877 -- # return 0 00:26:33.661 22:34:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:33.661 22:34:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:33.661 22:34:32 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:33.661 22:34:32 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:33.661 22:34:32 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:33.661 22:34:32 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:34.237 496+0 records in 00:26:34.237 496+0 records out 00:26:34.237 97517568 bytes (98 MB, 93 MiB) copied, 0.450951 s, 216 MB/s 00:26:34.237 22:34:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:34.237 22:34:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:34.237 22:34:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:34.237 22:34:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:34.237 22:34:33 -- bdev/nbd_common.sh@51 -- # local i 00:26:34.237 22:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:34.238 22:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:34.496 [2024-07-14 22:34:33.449855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@41 -- # break 00:26:34.496 22:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:26:34.496 22:34:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:34.754 [2024-07-14 22:34:33.800869] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:34.754 22:34:33 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.754 22:34:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.011 22:34:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:35.011 "name": "raid_bdev1", 00:26:35.011 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:35.011 "strip_size_kb": 64, 00:26:35.011 "state": "online", 00:26:35.011 "raid_level": "raid5f", 00:26:35.011 "superblock": true, 00:26:35.011 "num_base_bdevs": 4, 00:26:35.011 "num_base_bdevs_discovered": 3, 00:26:35.011 "num_base_bdevs_operational": 3, 00:26:35.011 "base_bdevs_list": [ 00:26:35.011 { 00:26:35.011 "name": null, 00:26:35.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.011 "is_configured": false, 00:26:35.011 "data_offset": 2048, 00:26:35.011 "data_size": 63488 00:26:35.011 }, 00:26:35.011 { 00:26:35.011 "name": "BaseBdev2", 00:26:35.011 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:35.011 "is_configured": true, 00:26:35.011 "data_offset": 2048, 00:26:35.011 "data_size": 63488 00:26:35.011 }, 00:26:35.011 { 00:26:35.011 "name": "BaseBdev3", 00:26:35.011 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:35.011 "is_configured": true, 00:26:35.011 "data_offset": 2048, 00:26:35.011 "data_size": 63488 00:26:35.011 }, 00:26:35.011 { 00:26:35.011 "name": "BaseBdev4", 00:26:35.011 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:35.011 "is_configured": true, 00:26:35.011 "data_offset": 2048, 00:26:35.011 "data_size": 63488 00:26:35.011 } 00:26:35.011 ] 00:26:35.011 }' 00:26:35.011 22:34:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:35.011 22:34:34 -- common/autotest_common.sh@10 -- # set +x 00:26:35.943 22:34:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:35.943 [2024-07-14 22:34:34.965051] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:35.943 [2024-07-14 22:34:34.965115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:35.943 [2024-07-14 22:34:34.976086] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c860 00:26:35.943 [2024-07-14 22:34:34.983367] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:35.943 22:34:34 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.319 22:34:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:37.319 "name": "raid_bdev1", 00:26:37.319 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:37.319 "strip_size_kb": 64, 00:26:37.319 "state": "online", 00:26:37.319 "raid_level": "raid5f", 
00:26:37.319 "superblock": true, 00:26:37.319 "num_base_bdevs": 4, 00:26:37.319 "num_base_bdevs_discovered": 4, 00:26:37.319 "num_base_bdevs_operational": 4, 00:26:37.319 "process": { 00:26:37.319 "type": "rebuild", 00:26:37.319 "target": "spare", 00:26:37.319 "progress": { 00:26:37.319 "blocks": 23040, 00:26:37.319 "percent": 12 00:26:37.319 } 00:26:37.319 }, 00:26:37.319 "base_bdevs_list": [ 00:26:37.319 { 00:26:37.319 "name": "spare", 00:26:37.319 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "BaseBdev2", 00:26:37.319 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "BaseBdev3", 00:26:37.319 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "BaseBdev4", 00:26:37.319 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 } 00:26:37.319 ] 00:26:37.319 }' 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.319 22:34:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:37.578 [2024-07-14 22:34:36.488743] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:37.578 [2024-07-14 22:34:36.493456] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:37.578 [2024-07-14 22:34:36.493537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.578 22:34:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.836 22:34:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:37.837 "name": "raid_bdev1", 00:26:37.837 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:37.837 "strip_size_kb": 64, 00:26:37.837 "state": "online", 00:26:37.837 "raid_level": "raid5f", 00:26:37.837 "superblock": true, 00:26:37.837 
"num_base_bdevs": 4, 00:26:37.837 "num_base_bdevs_discovered": 3, 00:26:37.837 "num_base_bdevs_operational": 3, 00:26:37.837 "base_bdevs_list": [ 00:26:37.837 { 00:26:37.837 "name": null, 00:26:37.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.837 "is_configured": false, 00:26:37.837 "data_offset": 2048, 00:26:37.837 "data_size": 63488 00:26:37.837 }, 00:26:37.837 { 00:26:37.837 "name": "BaseBdev2", 00:26:37.837 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:37.837 "is_configured": true, 00:26:37.837 "data_offset": 2048, 00:26:37.837 "data_size": 63488 00:26:37.837 }, 00:26:37.837 { 00:26:37.837 "name": "BaseBdev3", 00:26:37.837 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:37.837 "is_configured": true, 00:26:37.837 "data_offset": 2048, 00:26:37.837 "data_size": 63488 00:26:37.837 }, 00:26:37.837 { 00:26:37.837 "name": "BaseBdev4", 00:26:37.837 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:37.837 "is_configured": true, 00:26:37.837 "data_offset": 2048, 00:26:37.837 "data_size": 63488 00:26:37.837 } 00:26:37.837 ] 00:26:37.837 }' 00:26:37.837 22:34:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:37.837 22:34:36 -- common/autotest_common.sh@10 -- # set +x 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.402 22:34:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:38.661 "name": "raid_bdev1", 00:26:38.661 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:38.661 "strip_size_kb": 64, 00:26:38.661 "state": "online", 00:26:38.661 "raid_level": "raid5f", 00:26:38.661 "superblock": true, 00:26:38.661 "num_base_bdevs": 4, 00:26:38.661 "num_base_bdevs_discovered": 3, 00:26:38.661 "num_base_bdevs_operational": 3, 00:26:38.661 "base_bdevs_list": [ 00:26:38.661 { 00:26:38.661 "name": null, 00:26:38.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.661 "is_configured": false, 00:26:38.661 "data_offset": 2048, 00:26:38.661 "data_size": 63488 00:26:38.661 }, 00:26:38.661 { 00:26:38.661 "name": "BaseBdev2", 00:26:38.661 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:38.661 "is_configured": true, 00:26:38.661 "data_offset": 2048, 00:26:38.661 "data_size": 63488 00:26:38.661 }, 00:26:38.661 { 00:26:38.661 "name": "BaseBdev3", 00:26:38.661 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:38.661 "is_configured": true, 00:26:38.661 "data_offset": 2048, 00:26:38.661 "data_size": 63488 00:26:38.661 }, 00:26:38.661 { 00:26:38.661 "name": "BaseBdev4", 00:26:38.661 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:38.661 "is_configured": true, 00:26:38.661 "data_offset": 2048, 00:26:38.661 "data_size": 63488 00:26:38.661 } 00:26:38.661 ] 00:26:38.661 }' 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@191 -- # 
[[ none == \n\o\n\e ]] 00:26:38.661 22:34:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:38.920 [2024-07-14 22:34:37.944425] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:38.920 [2024-07-14 22:34:37.944486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:38.920 [2024-07-14 22:34:37.954445] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ca00 00:26:38.920 [2024-07-14 22:34:37.961192] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:38.920 22:34:37 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:40.295 22:34:38 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.295 22:34:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:40.296 22:34:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:40.296 22:34:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:40.296 22:34:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:40.296 22:34:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.296 22:34:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:40.296 "name": "raid_bdev1", 00:26:40.296 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:40.296 "strip_size_kb": 64, 00:26:40.296 "state": "online", 00:26:40.296 "raid_level": "raid5f", 00:26:40.296 "superblock": true, 00:26:40.296 "num_base_bdevs": 4, 00:26:40.296 "num_base_bdevs_discovered": 4, 00:26:40.296 "num_base_bdevs_operational": 4, 00:26:40.296 "process": { 00:26:40.296 "type": "rebuild", 00:26:40.296 "target": "spare", 00:26:40.296 "progress": { 00:26:40.296 "blocks": 23040, 00:26:40.296 "percent": 12 00:26:40.296 } 00:26:40.296 }, 00:26:40.296 "base_bdevs_list": [ 00:26:40.296 { 00:26:40.296 "name": "spare", 00:26:40.296 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:40.296 "is_configured": true, 00:26:40.296 "data_offset": 2048, 00:26:40.296 "data_size": 63488 00:26:40.296 }, 00:26:40.296 { 00:26:40.296 "name": "BaseBdev2", 00:26:40.296 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:40.296 "is_configured": true, 00:26:40.296 "data_offset": 2048, 00:26:40.296 "data_size": 63488 00:26:40.296 }, 00:26:40.296 { 00:26:40.296 "name": "BaseBdev3", 00:26:40.296 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:40.296 "is_configured": true, 00:26:40.296 "data_offset": 2048, 00:26:40.296 "data_size": 63488 00:26:40.296 }, 00:26:40.296 { 00:26:40.296 "name": "BaseBdev4", 00:26:40.296 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:40.296 "is_configured": true, 00:26:40.296 "data_offset": 2048, 00:26:40.296 "data_size": 63488 00:26:40.296 } 00:26:40.296 ] 00:26:40.296 }' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:40.296 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 
617: [: =: unary operator expected 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@657 -- # local timeout=736 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.296 22:34:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:40.554 "name": "raid_bdev1", 00:26:40.554 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:40.554 "strip_size_kb": 64, 00:26:40.554 "state": "online", 00:26:40.554 "raid_level": "raid5f", 00:26:40.554 "superblock": true, 00:26:40.554 "num_base_bdevs": 4, 00:26:40.554 "num_base_bdevs_discovered": 4, 00:26:40.554 "num_base_bdevs_operational": 4, 00:26:40.554 "process": { 00:26:40.554 "type": "rebuild", 00:26:40.554 "target": "spare", 00:26:40.554 "progress": { 00:26:40.554 "blocks": 28800, 00:26:40.554 "percent": 15 00:26:40.554 } 00:26:40.554 }, 00:26:40.554 "base_bdevs_list": [ 00:26:40.554 { 00:26:40.554 "name": "spare", 00:26:40.554 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:40.554 "is_configured": true, 00:26:40.554 "data_offset": 2048, 00:26:40.554 "data_size": 63488 00:26:40.554 }, 00:26:40.554 { 00:26:40.554 "name": "BaseBdev2", 00:26:40.554 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:40.554 "is_configured": true, 00:26:40.554 "data_offset": 2048, 00:26:40.554 "data_size": 63488 00:26:40.554 }, 00:26:40.554 { 00:26:40.554 "name": "BaseBdev3", 00:26:40.554 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:40.554 "is_configured": true, 00:26:40.554 "data_offset": 2048, 00:26:40.554 "data_size": 63488 00:26:40.554 }, 00:26:40.554 { 00:26:40.554 "name": "BaseBdev4", 00:26:40.554 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:40.554 "is_configured": true, 00:26:40.554 "data_offset": 2048, 00:26:40.554 "data_size": 63488 00:26:40.554 } 00:26:40.554 ] 00:26:40.554 }' 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.554 22:34:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:41.927 "name": "raid_bdev1", 00:26:41.927 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:41.927 "strip_size_kb": 64, 00:26:41.927 "state": "online", 00:26:41.927 "raid_level": "raid5f", 00:26:41.927 "superblock": true, 00:26:41.927 "num_base_bdevs": 4, 00:26:41.927 "num_base_bdevs_discovered": 4, 00:26:41.927 "num_base_bdevs_operational": 4, 00:26:41.927 "process": { 00:26:41.927 "type": "rebuild", 00:26:41.927 "target": "spare", 00:26:41.927 "progress": { 00:26:41.927 "blocks": 53760, 00:26:41.927 "percent": 28 00:26:41.927 } 00:26:41.927 }, 00:26:41.927 "base_bdevs_list": [ 00:26:41.927 { 00:26:41.927 "name": "spare", 00:26:41.927 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:41.927 "is_configured": true, 00:26:41.927 "data_offset": 2048, 00:26:41.927 "data_size": 63488 00:26:41.927 }, 00:26:41.927 { 00:26:41.927 "name": "BaseBdev2", 00:26:41.927 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:41.927 "is_configured": true, 00:26:41.927 "data_offset": 2048, 00:26:41.927 "data_size": 63488 00:26:41.927 }, 00:26:41.927 { 00:26:41.927 "name": "BaseBdev3", 00:26:41.927 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:41.927 "is_configured": true, 00:26:41.927 "data_offset": 2048, 00:26:41.927 "data_size": 63488 00:26:41.927 }, 00:26:41.927 { 00:26:41.927 "name": "BaseBdev4", 00:26:41.927 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:41.927 "is_configured": true, 00:26:41.927 "data_offset": 2048, 00:26:41.927 "data_size": 63488 00:26:41.927 } 00:26:41.927 ] 00:26:41.927 }' 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:41.927 22:34:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.303 22:34:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:43.303 "name": "raid_bdev1", 00:26:43.303 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:43.303 "strip_size_kb": 64, 00:26:43.303 "state": "online", 00:26:43.303 "raid_level": "raid5f", 00:26:43.303 "superblock": true, 00:26:43.303 "num_base_bdevs": 4, 00:26:43.303 "num_base_bdevs_discovered": 4, 00:26:43.303 "num_base_bdevs_operational": 4, 00:26:43.303 "process": { 00:26:43.303 "type": "rebuild", 00:26:43.303 "target": "spare", 00:26:43.303 "progress": { 00:26:43.303 "blocks": 80640, 00:26:43.303 "percent": 42 00:26:43.303 } 00:26:43.303 }, 
00:26:43.303 "base_bdevs_list": [ 00:26:43.303 { 00:26:43.303 "name": "spare", 00:26:43.303 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:43.303 "is_configured": true, 00:26:43.303 "data_offset": 2048, 00:26:43.303 "data_size": 63488 00:26:43.303 }, 00:26:43.303 { 00:26:43.303 "name": "BaseBdev2", 00:26:43.303 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:43.303 "is_configured": true, 00:26:43.303 "data_offset": 2048, 00:26:43.303 "data_size": 63488 00:26:43.303 }, 00:26:43.303 { 00:26:43.303 "name": "BaseBdev3", 00:26:43.303 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:43.303 "is_configured": true, 00:26:43.303 "data_offset": 2048, 00:26:43.303 "data_size": 63488 00:26:43.303 }, 00:26:43.303 { 00:26:43.303 "name": "BaseBdev4", 00:26:43.303 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:43.303 "is_configured": true, 00:26:43.303 "data_offset": 2048, 00:26:43.303 "data_size": 63488 00:26:43.303 } 00:26:43.303 ] 00:26:43.303 }' 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.303 22:34:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:44.679 "name": "raid_bdev1", 00:26:44.679 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:44.679 "strip_size_kb": 64, 00:26:44.679 "state": "online", 00:26:44.679 "raid_level": "raid5f", 00:26:44.679 "superblock": true, 00:26:44.679 "num_base_bdevs": 4, 00:26:44.679 "num_base_bdevs_discovered": 4, 00:26:44.679 "num_base_bdevs_operational": 4, 00:26:44.679 "process": { 00:26:44.679 "type": "rebuild", 00:26:44.679 "target": "spare", 00:26:44.679 "progress": { 00:26:44.679 "blocks": 105600, 00:26:44.679 "percent": 55 00:26:44.679 } 00:26:44.679 }, 00:26:44.679 "base_bdevs_list": [ 00:26:44.679 { 00:26:44.679 "name": "spare", 00:26:44.679 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:44.679 "is_configured": true, 00:26:44.679 "data_offset": 2048, 00:26:44.679 "data_size": 63488 00:26:44.679 }, 00:26:44.679 { 00:26:44.679 "name": "BaseBdev2", 00:26:44.679 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:44.679 "is_configured": true, 00:26:44.679 "data_offset": 2048, 00:26:44.679 "data_size": 63488 00:26:44.679 }, 00:26:44.679 { 00:26:44.679 "name": "BaseBdev3", 00:26:44.679 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:44.679 "is_configured": true, 00:26:44.679 "data_offset": 2048, 00:26:44.679 "data_size": 63488 00:26:44.679 }, 00:26:44.679 { 00:26:44.679 "name": "BaseBdev4", 00:26:44.679 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 
00:26:44.679 "is_configured": true, 00:26:44.679 "data_offset": 2048, 00:26:44.679 "data_size": 63488 00:26:44.679 } 00:26:44.679 ] 00:26:44.679 }' 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.679 22:34:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:46.053 "name": "raid_bdev1", 00:26:46.053 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:46.053 "strip_size_kb": 64, 00:26:46.053 "state": "online", 00:26:46.053 "raid_level": "raid5f", 00:26:46.053 "superblock": true, 00:26:46.053 "num_base_bdevs": 4, 00:26:46.053 "num_base_bdevs_discovered": 4, 00:26:46.053 "num_base_bdevs_operational": 4, 00:26:46.053 "process": { 00:26:46.053 "type": "rebuild", 00:26:46.053 "target": "spare", 00:26:46.053 "progress": { 00:26:46.053 "blocks": 132480, 00:26:46.053 "percent": 69 00:26:46.053 } 00:26:46.053 }, 00:26:46.053 "base_bdevs_list": [ 00:26:46.053 { 00:26:46.053 "name": "spare", 00:26:46.053 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:46.053 "is_configured": true, 00:26:46.053 "data_offset": 2048, 00:26:46.053 "data_size": 63488 00:26:46.053 }, 00:26:46.053 { 00:26:46.053 "name": "BaseBdev2", 00:26:46.053 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:46.053 "is_configured": true, 00:26:46.053 "data_offset": 2048, 00:26:46.053 "data_size": 63488 00:26:46.053 }, 00:26:46.053 { 00:26:46.053 "name": "BaseBdev3", 00:26:46.053 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:46.053 "is_configured": true, 00:26:46.053 "data_offset": 2048, 00:26:46.053 "data_size": 63488 00:26:46.053 }, 00:26:46.053 { 00:26:46.053 "name": "BaseBdev4", 00:26:46.053 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:46.053 "is_configured": true, 00:26:46.053 "data_offset": 2048, 00:26:46.053 "data_size": 63488 00:26:46.053 } 00:26:46.053 ] 00:26:46.053 }' 00:26:46.053 22:34:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:46.053 22:34:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.053 22:34:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:46.053 22:34:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.053 22:34:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:47.073 22:34:46 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.073 22:34:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.331 22:34:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:47.331 "name": "raid_bdev1", 00:26:47.331 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:47.331 "strip_size_kb": 64, 00:26:47.331 "state": "online", 00:26:47.331 "raid_level": "raid5f", 00:26:47.331 "superblock": true, 00:26:47.331 "num_base_bdevs": 4, 00:26:47.331 "num_base_bdevs_discovered": 4, 00:26:47.331 "num_base_bdevs_operational": 4, 00:26:47.331 "process": { 00:26:47.331 "type": "rebuild", 00:26:47.331 "target": "spare", 00:26:47.332 "progress": { 00:26:47.332 "blocks": 157440, 00:26:47.332 "percent": 82 00:26:47.332 } 00:26:47.332 }, 00:26:47.332 "base_bdevs_list": [ 00:26:47.332 { 00:26:47.332 "name": "spare", 00:26:47.332 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:47.332 "is_configured": true, 00:26:47.332 "data_offset": 2048, 00:26:47.332 "data_size": 63488 00:26:47.332 }, 00:26:47.332 { 00:26:47.332 "name": "BaseBdev2", 00:26:47.332 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:47.332 "is_configured": true, 00:26:47.332 "data_offset": 2048, 00:26:47.332 "data_size": 63488 00:26:47.332 }, 00:26:47.332 { 00:26:47.332 "name": "BaseBdev3", 00:26:47.332 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:47.332 "is_configured": true, 00:26:47.332 "data_offset": 2048, 00:26:47.332 "data_size": 63488 00:26:47.332 }, 00:26:47.332 { 00:26:47.332 "name": "BaseBdev4", 00:26:47.332 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:47.332 "is_configured": true, 00:26:47.332 "data_offset": 2048, 00:26:47.332 "data_size": 63488 00:26:47.332 } 00:26:47.332 ] 00:26:47.332 }' 00:26:47.332 22:34:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:47.332 22:34:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:47.332 22:34:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:47.590 22:34:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.590 22:34:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.524 22:34:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.782 22:34:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:48.782 "name": "raid_bdev1", 00:26:48.782 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:48.782 "strip_size_kb": 64, 00:26:48.782 "state": "online", 00:26:48.782 "raid_level": "raid5f", 00:26:48.782 "superblock": true, 00:26:48.782 "num_base_bdevs": 4, 00:26:48.782 "num_base_bdevs_discovered": 4, 
00:26:48.782 "num_base_bdevs_operational": 4, 00:26:48.782 "process": { 00:26:48.782 "type": "rebuild", 00:26:48.782 "target": "spare", 00:26:48.782 "progress": { 00:26:48.782 "blocks": 184320, 00:26:48.782 "percent": 96 00:26:48.782 } 00:26:48.782 }, 00:26:48.782 "base_bdevs_list": [ 00:26:48.782 { 00:26:48.782 "name": "spare", 00:26:48.782 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:48.782 "is_configured": true, 00:26:48.782 "data_offset": 2048, 00:26:48.782 "data_size": 63488 00:26:48.782 }, 00:26:48.782 { 00:26:48.782 "name": "BaseBdev2", 00:26:48.782 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:48.782 "is_configured": true, 00:26:48.782 "data_offset": 2048, 00:26:48.782 "data_size": 63488 00:26:48.782 }, 00:26:48.782 { 00:26:48.782 "name": "BaseBdev3", 00:26:48.782 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:48.782 "is_configured": true, 00:26:48.782 "data_offset": 2048, 00:26:48.782 "data_size": 63488 00:26:48.782 }, 00:26:48.782 { 00:26:48.782 "name": "BaseBdev4", 00:26:48.783 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:48.783 "is_configured": true, 00:26:48.783 "data_offset": 2048, 00:26:48.783 "data_size": 63488 00:26:48.783 } 00:26:48.783 ] 00:26:48.783 }' 00:26:48.783 22:34:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:48.783 22:34:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.783 22:34:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:48.783 22:34:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.783 22:34:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:49.041 [2024-07-14 22:34:48.029038] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:49.041 [2024-07-14 22:34:48.029123] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:49.041 [2024-07-14 22:34:48.029302] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.977 22:34:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:50.236 "name": "raid_bdev1", 00:26:50.236 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:50.236 "strip_size_kb": 64, 00:26:50.236 "state": "online", 00:26:50.236 "raid_level": "raid5f", 00:26:50.236 "superblock": true, 00:26:50.236 "num_base_bdevs": 4, 00:26:50.236 "num_base_bdevs_discovered": 4, 00:26:50.236 "num_base_bdevs_operational": 4, 00:26:50.236 "base_bdevs_list": [ 00:26:50.236 { 00:26:50.236 "name": "spare", 00:26:50.236 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:50.236 "is_configured": true, 00:26:50.236 "data_offset": 2048, 00:26:50.236 "data_size": 63488 00:26:50.236 }, 00:26:50.236 { 00:26:50.236 "name": "BaseBdev2", 00:26:50.236 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:50.236 "is_configured": 
true, 00:26:50.236 "data_offset": 2048, 00:26:50.236 "data_size": 63488 00:26:50.236 }, 00:26:50.236 { 00:26:50.236 "name": "BaseBdev3", 00:26:50.236 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:50.236 "is_configured": true, 00:26:50.236 "data_offset": 2048, 00:26:50.236 "data_size": 63488 00:26:50.236 }, 00:26:50.236 { 00:26:50.236 "name": "BaseBdev4", 00:26:50.236 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:50.236 "is_configured": true, 00:26:50.236 "data_offset": 2048, 00:26:50.236 "data_size": 63488 00:26:50.236 } 00:26:50.236 ] 00:26:50.236 }' 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@660 -- # break 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.236 22:34:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.494 22:34:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:50.494 "name": "raid_bdev1", 00:26:50.494 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:50.494 "strip_size_kb": 64, 00:26:50.494 "state": "online", 00:26:50.494 "raid_level": "raid5f", 00:26:50.494 "superblock": true, 00:26:50.494 "num_base_bdevs": 4, 00:26:50.494 "num_base_bdevs_discovered": 4, 00:26:50.494 "num_base_bdevs_operational": 4, 00:26:50.494 "base_bdevs_list": [ 00:26:50.494 { 00:26:50.494 "name": "spare", 00:26:50.494 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:50.494 "is_configured": true, 00:26:50.494 "data_offset": 2048, 00:26:50.494 "data_size": 63488 00:26:50.494 }, 00:26:50.494 { 00:26:50.494 "name": "BaseBdev2", 00:26:50.494 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:50.494 "is_configured": true, 00:26:50.494 "data_offset": 2048, 00:26:50.494 "data_size": 63488 00:26:50.494 }, 00:26:50.494 { 00:26:50.494 "name": "BaseBdev3", 00:26:50.494 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:50.494 "is_configured": true, 00:26:50.494 "data_offset": 2048, 00:26:50.494 "data_size": 63488 00:26:50.494 }, 00:26:50.494 { 00:26:50.494 "name": "BaseBdev4", 00:26:50.494 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:50.494 "is_configured": true, 00:26:50.494 "data_offset": 2048, 00:26:50.494 "data_size": 63488 00:26:50.494 } 00:26:50.494 ] 00:26:50.494 }' 00:26:50.494 22:34:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:50.494 22:34:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.494 22:34:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:50.753 22:34:49 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.753 22:34:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.011 22:34:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.011 "name": "raid_bdev1", 00:26:51.011 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:51.011 "strip_size_kb": 64, 00:26:51.011 "state": "online", 00:26:51.011 "raid_level": "raid5f", 00:26:51.011 "superblock": true, 00:26:51.011 "num_base_bdevs": 4, 00:26:51.011 "num_base_bdevs_discovered": 4, 00:26:51.011 "num_base_bdevs_operational": 4, 00:26:51.011 "base_bdevs_list": [ 00:26:51.011 { 00:26:51.011 "name": "spare", 00:26:51.011 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:51.011 "is_configured": true, 00:26:51.011 "data_offset": 2048, 00:26:51.011 "data_size": 63488 00:26:51.011 }, 00:26:51.011 { 00:26:51.011 "name": "BaseBdev2", 00:26:51.011 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:51.011 "is_configured": true, 00:26:51.011 "data_offset": 2048, 00:26:51.011 "data_size": 63488 00:26:51.011 }, 00:26:51.011 { 00:26:51.011 "name": "BaseBdev3", 00:26:51.011 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:51.011 "is_configured": true, 00:26:51.011 "data_offset": 2048, 00:26:51.011 "data_size": 63488 00:26:51.011 }, 00:26:51.011 { 00:26:51.011 "name": "BaseBdev4", 00:26:51.011 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:51.011 "is_configured": true, 00:26:51.011 "data_offset": 2048, 00:26:51.011 "data_size": 63488 00:26:51.011 } 00:26:51.011 ] 00:26:51.011 }' 00:26:51.011 22:34:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.011 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 22:34:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:51.834 [2024-07-14 22:34:50.872022] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:51.834 [2024-07-14 22:34:50.872054] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:51.834 [2024-07-14 22:34:50.872183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:51.834 [2024-07-14 22:34:50.872330] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:51.834 [2024-07-14 22:34:50.872346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:26:51.834 22:34:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.834 22:34:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:52.092 22:34:51 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:52.092 22:34:51 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:52.092 22:34:51 -- 
bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@12 -- # local i 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.092 22:34:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:52.351 /dev/nbd0 00:26:52.351 22:34:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:52.351 22:34:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:52.351 22:34:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:52.351 22:34:51 -- common/autotest_common.sh@857 -- # local i 00:26:52.351 22:34:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:52.351 22:34:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:52.351 22:34:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:52.351 22:34:51 -- common/autotest_common.sh@861 -- # break 00:26:52.351 22:34:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:52.351 22:34:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:52.351 22:34:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.351 1+0 records in 00:26:52.351 1+0 records out 00:26:52.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619282 s, 6.6 MB/s 00:26:52.351 22:34:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.351 22:34:51 -- common/autotest_common.sh@874 -- # size=4096 00:26:52.351 22:34:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.351 22:34:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:52.351 22:34:51 -- common/autotest_common.sh@877 -- # return 0 00:26:52.351 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.351 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.351 22:34:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:52.610 /dev/nbd1 00:26:52.610 22:34:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:52.610 22:34:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:52.610 22:34:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:52.610 22:34:51 -- common/autotest_common.sh@857 -- # local i 00:26:52.610 22:34:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:52.610 22:34:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:52.610 22:34:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:52.610 22:34:51 -- common/autotest_common.sh@861 -- # break 00:26:52.610 22:34:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:52.610 22:34:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:52.610 22:34:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.610 1+0 records in 00:26:52.610 1+0 
records out 00:26:52.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646869 s, 6.3 MB/s 00:26:52.610 22:34:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.610 22:34:51 -- common/autotest_common.sh@874 -- # size=4096 00:26:52.610 22:34:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.610 22:34:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:52.610 22:34:51 -- common/autotest_common.sh@877 -- # return 0 00:26:52.610 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.610 22:34:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.610 22:34:51 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:52.868 22:34:51 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@51 -- # local i 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.868 22:34:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@41 -- # break 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.126 22:34:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:53.385 22:34:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:53.644 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:53.644 22:34:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.644 22:34:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:53.644 22:34:52 -- bdev/nbd_common.sh@41 -- # break 00:26:53.644 22:34:52 -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.644 22:34:52 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:53.644 22:34:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:53.644 22:34:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:53.644 22:34:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:53.902 22:34:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:54.161 
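The block above exports BaseBdev1 and the rebuilt spare as kernel NBD devices and byte-compares them. Reduced to its essentials, the check looks roughly like the following sketch (the rpc.py subcommands, device names, and the 1 MiB skip are taken from the trace above; the $rpc shorthand and the standalone form are illustrative, not the script verbatim):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# Export the replaced base bdev and the rebuilt spare as /dev/nbd0 and /dev/nbd1.
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
# Skip the first 1048576 bytes of both devices (the superblock/metadata area,
# which legitimately differs) and compare the data region byte for byte;
# cmp exits non-zero on the first mismatch, which fails the test.
cmp -i 1048576 /dev/nbd0 /dev/nbd1
# Tear the exports down again.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1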
[2024-07-14 22:34:53.042960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:54.161 [2024-07-14 22:34:53.043140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.161 [2024-07-14 22:34:53.043185] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:54.161 [2024-07-14 22:34:53.043210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.161 [2024-07-14 22:34:53.045879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.161 [2024-07-14 22:34:53.045962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:54.161 [2024-07-14 22:34:53.046190] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:54.161 [2024-07-14 22:34:53.046249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:54.161 BaseBdev1 00:26:54.161 22:34:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:54.161 22:34:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:54.161 22:34:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:54.419 22:34:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:54.678 [2024-07-14 22:34:53.503130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:54.678 [2024-07-14 22:34:53.503235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.678 [2024-07-14 22:34:53.503341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:54.678 [2024-07-14 22:34:53.503365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.678 [2024-07-14 22:34:53.503954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.678 [2024-07-14 22:34:53.504099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:54.678 [2024-07-14 22:34:53.504202] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:54.678 [2024-07-14 22:34:53.504231] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:54.678 [2024-07-14 22:34:53.504238] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.678 [2024-07-14 22:34:53.504288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:26:54.678 [2024-07-14 22:34:53.504368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:54.678 BaseBdev2 00:26:54.678 22:34:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:54.678 22:34:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:54.678 22:34:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:54.678 22:34:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:54.936 [2024-07-14 22:34:53.891196] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:54.936 [2024-07-14 22:34:53.891272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.936 [2024-07-14 22:34:53.891301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:54.936 [2024-07-14 22:34:53.891324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.936 [2024-07-14 22:34:53.891830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.936 [2024-07-14 22:34:53.891958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:54.936 [2024-07-14 22:34:53.892043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:54.936 [2024-07-14 22:34:53.892068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:54.936 BaseBdev3 00:26:54.936 22:34:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:54.936 22:34:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:54.936 22:34:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:55.193 22:34:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:55.451 [2024-07-14 22:34:54.347305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:55.451 [2024-07-14 22:34:54.347415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.451 [2024-07-14 22:34:54.347444] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:55.451 [2024-07-14 22:34:54.347469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.451 [2024-07-14 22:34:54.347989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.451 [2024-07-14 22:34:54.348056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:55.451 [2024-07-14 22:34:54.348177] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:55.451 [2024-07-14 22:34:54.348203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:55.451 BaseBdev4 00:26:55.451 22:34:54 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:55.708 [2024-07-14 22:34:54.739357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:55.708 [2024-07-14 22:34:54.739431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.708 [2024-07-14 22:34:54.739459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:55.708 [2024-07-14 22:34:54.739483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.708 [2024-07-14 22:34:54.740073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.708 [2024-07-14 22:34:54.740151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:55.708 [2024-07-14 
22:34:54.740225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:55.708 [2024-07-14 22:34:54.740258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:55.708 spare 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.708 22:34:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.966 [2024-07-14 22:34:54.840460] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:26:55.966 [2024-07-14 22:34:54.840484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:55.966 [2024-07-14 22:34:54.840607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004cc50 00:26:55.966 [2024-07-14 22:34:54.845874] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:26:55.966 [2024-07-14 22:34:54.845897] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:26:55.966 [2024-07-14 22:34:54.846056] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.966 22:34:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:55.966 "name": "raid_bdev1", 00:26:55.966 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:55.966 "strip_size_kb": 64, 00:26:55.966 "state": "online", 00:26:55.966 "raid_level": "raid5f", 00:26:55.966 "superblock": true, 00:26:55.966 "num_base_bdevs": 4, 00:26:55.966 "num_base_bdevs_discovered": 4, 00:26:55.966 "num_base_bdevs_operational": 4, 00:26:55.966 "base_bdevs_list": [ 00:26:55.966 { 00:26:55.966 "name": "spare", 00:26:55.966 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:55.966 "is_configured": true, 00:26:55.966 "data_offset": 2048, 00:26:55.966 "data_size": 63488 00:26:55.966 }, 00:26:55.966 { 00:26:55.966 "name": "BaseBdev2", 00:26:55.966 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:55.966 "is_configured": true, 00:26:55.966 "data_offset": 2048, 00:26:55.966 "data_size": 63488 00:26:55.966 }, 00:26:55.966 { 00:26:55.966 "name": "BaseBdev3", 00:26:55.966 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:55.966 "is_configured": true, 00:26:55.966 "data_offset": 2048, 00:26:55.966 "data_size": 63488 00:26:55.966 }, 00:26:55.966 { 00:26:55.966 "name": "BaseBdev4", 00:26:55.966 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:55.966 "is_configured": true, 00:26:55.966 "data_offset": 2048, 00:26:55.966 "data_size": 63488 00:26:55.966 } 00:26:55.966 ] 00:26:55.966 }' 00:26:55.966 22:34:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:55.966 22:34:54 
-- common/autotest_common.sh@10 -- # set +x 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.532 22:34:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.791 22:34:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:56.791 "name": "raid_bdev1", 00:26:56.791 "uuid": "d3a3cb11-98d9-49ed-90d6-d59b8cf9b205", 00:26:56.791 "strip_size_kb": 64, 00:26:56.791 "state": "online", 00:26:56.791 "raid_level": "raid5f", 00:26:56.791 "superblock": true, 00:26:56.791 "num_base_bdevs": 4, 00:26:56.791 "num_base_bdevs_discovered": 4, 00:26:56.791 "num_base_bdevs_operational": 4, 00:26:56.791 "base_bdevs_list": [ 00:26:56.791 { 00:26:56.791 "name": "spare", 00:26:56.791 "uuid": "1e2468c1-8359-5638-ae3c-d13f74ecca3a", 00:26:56.791 "is_configured": true, 00:26:56.791 "data_offset": 2048, 00:26:56.791 "data_size": 63488 00:26:56.791 }, 00:26:56.791 { 00:26:56.791 "name": "BaseBdev2", 00:26:56.791 "uuid": "45af472b-f530-52c2-a542-ed3f86d59aa7", 00:26:56.791 "is_configured": true, 00:26:56.791 "data_offset": 2048, 00:26:56.791 "data_size": 63488 00:26:56.791 }, 00:26:56.791 { 00:26:56.791 "name": "BaseBdev3", 00:26:56.791 "uuid": "e1856d8d-b2e0-56c5-a241-20826544cf11", 00:26:56.791 "is_configured": true, 00:26:56.791 "data_offset": 2048, 00:26:56.791 "data_size": 63488 00:26:56.791 }, 00:26:56.791 { 00:26:56.791 "name": "BaseBdev4", 00:26:56.791 "uuid": "db6fa60e-3afe-574b-83e4-e961b980c445", 00:26:56.791 "is_configured": true, 00:26:56.791 "data_offset": 2048, 00:26:56.791 "data_size": 63488 00:26:56.791 } 00:26:56.791 ] 00:26:56.791 }' 00:26:56.791 22:34:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:56.791 22:34:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:56.791 22:34:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:57.049 22:34:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:57.049 22:34:55 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.049 22:34:55 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:57.049 22:34:56 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.049 22:34:56 -- bdev/bdev_raid.sh@709 -- # killprocess 135458 00:26:57.049 22:34:56 -- common/autotest_common.sh@926 -- # '[' -z 135458 ']' 00:26:57.049 22:34:56 -- common/autotest_common.sh@930 -- # kill -0 135458 00:26:57.049 22:34:56 -- common/autotest_common.sh@931 -- # uname 00:26:57.049 22:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.049 22:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135458 00:26:57.049 killing process with pid 135458 00:26:57.049 Received shutdown signal, test time was about 60.000000 seconds 00:26:57.049 00:26:57.049 Latency(us) 00:26:57.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.049 
=================================================================================================================== 00:26:57.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:57.049 22:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:57.049 22:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:57.049 22:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135458' 00:26:57.049 22:34:56 -- common/autotest_common.sh@945 -- # kill 135458 00:26:57.049 22:34:56 -- common/autotest_common.sh@950 -- # wait 135458 00:26:57.049 [2024-07-14 22:34:56.110757] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:57.049 [2024-07-14 22:34:56.110831] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:57.049 [2024-07-14 22:34:56.110961] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:57.049 [2024-07-14 22:34:56.110980] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:26:57.615 [2024-07-14 22:34:56.435398] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.550 ************************************ 00:26:58.550 END TEST raid5f_rebuild_test_sb 00:26:58.550 ************************************ 00:26:58.550 22:34:57 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:58.550 00:26:58.550 real 0m29.967s 00:26:58.550 user 0m46.345s 00:26:58.550 sys 0m3.011s 00:26:58.550 22:34:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.550 22:34:57 -- common/autotest_common.sh@10 -- # set +x 00:26:58.550 22:34:57 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:58.550 00:26:58.550 real 12m3.442s 00:26:58.550 user 20m1.497s 00:26:58.550 sys 1m29.951s 00:26:58.550 22:34:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.550 ************************************ 00:26:58.550 END TEST bdev_raid 00:26:58.550 ************************************ 00:26:58.550 22:34:57 -- common/autotest_common.sh@10 -- # set +x 00:26:58.550 22:34:57 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:58.550 22:34:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:58.550 22:34:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:58.550 22:34:57 -- common/autotest_common.sh@10 -- # set +x 00:26:58.550 ************************************ 00:26:58.550 START TEST bdevperf_config 00:26:58.550 ************************************ 00:26:58.550 22:34:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:58.550 * Looking for test storage... 
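The bdevperf_config suite starting here drives /home/vagrant/spdk_repo/spdk/build/examples/bdevperf with an INI-style job file assembled by the create_job calls that follow. Inferred from the [global]/[job0..3] section names echoed in the trace, the first pass produces a file roughly equivalent to this sketch (the heredoc form is illustrative — the script actually appends section by section with echo and cat; only [global] sets rw and filename here, so the four empty job sections inherit them, matching the "Using job config with 4 jobs" output below):

cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf <<'EOF'
[global]
rw=read
filename=Malloc0

[job0]
[job1]
[job2]
[job3]
EOF
# Run all four jobs for 2 seconds, as in the invocation traced below:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
  -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf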
00:26:58.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:58.550 22:34:57 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:58.550 22:34:57 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:58.550 22:34:57 -- bdevperf/common.sh@9 -- # local rw=read 00:26:58.550 22:34:57 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:58.550 22:34:57 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:58.550 22:34:57 -- bdevperf/common.sh@13 -- # cat 00:26:58.550 22:34:57 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:58.550 00:26:58.550 22:34:57 -- bdevperf/common.sh@19 -- # echo 00:26:58.550 22:34:57 -- bdevperf/common.sh@20 -- # cat 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:58.550 22:34:57 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:58.550 22:34:57 -- bdevperf/common.sh@9 -- # local rw= 00:26:58.550 22:34:57 -- bdevperf/common.sh@10 -- # local filename= 00:26:58.550 00:26:58.550 22:34:57 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:58.550 22:34:57 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:58.550 22:34:57 -- bdevperf/common.sh@19 -- # echo 00:26:58.550 22:34:57 -- bdevperf/common.sh@20 -- # cat 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:58.550 22:34:57 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:58.550 22:34:57 -- bdevperf/common.sh@9 -- # local rw= 00:26:58.550 22:34:57 -- bdevperf/common.sh@10 -- # local filename= 00:26:58.550 22:34:57 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:58.550 00:26:58.550 22:34:57 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:58.550 22:34:57 -- bdevperf/common.sh@19 -- # echo 00:26:58.550 22:34:57 -- bdevperf/common.sh@20 -- # cat 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:58.550 22:34:57 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:58.550 22:34:57 -- bdevperf/common.sh@9 -- # local rw= 00:26:58.550 22:34:57 -- bdevperf/common.sh@10 -- # local filename= 00:26:58.550 22:34:57 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:58.550 00:26:58.550 22:34:57 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:58.550 22:34:57 -- bdevperf/common.sh@19 -- # echo 00:26:58.550 22:34:57 -- bdevperf/common.sh@20 -- # cat 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:58.550 22:34:57 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:58.550 22:34:57 -- bdevperf/common.sh@9 -- # local rw= 00:26:58.550 22:34:57 -- bdevperf/common.sh@10 -- # local filename= 00:26:58.550 22:34:57 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:58.550 22:34:57 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:58.550 00:26:58.550 22:34:57 -- bdevperf/common.sh@19 -- # echo 00:26:58.550 22:34:57 -- bdevperf/common.sh@20 -- # cat 00:26:58.550 22:34:57 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:02.737 22:35:01 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-14 22:34:57.625165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:02.737 [2024-07-14 22:34:57.625368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136284 ] 00:27:02.737 Using job config with 4 jobs 00:27:02.737 [2024-07-14 22:34:57.794076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.737 [2024-07-14 22:34:57.975944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.737 cpumask for '\''job0'\'' is too big 00:27:02.737 cpumask for '\''job1'\'' is too big 00:27:02.737 cpumask for '\''job2'\'' is too big 00:27:02.737 cpumask for '\''job3'\'' is too big 00:27:02.737 Running I/O for 2 seconds... 00:27:02.737 00:27:02.737 Latency(us) 00:27:02.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32190.39 31.44 0.00 0.00 7946.58 1459.67 11915.64 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32162.86 31.41 0.00 0.00 7939.71 1400.09 11081.54 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32137.79 31.38 0.00 0.00 7933.64 1377.75 11141.12 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32212.60 31.46 0.00 0.00 7902.78 670.25 10902.81 00:27:02.737 =================================================================================================================== 00:27:02.737 Total : 128703.65 125.69 0.00 0.00 7930.65 670.25 11915.64' 00:27:02.737 22:35:01 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-14 22:34:57.625165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:02.737 [2024-07-14 22:34:57.625368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136284 ] 00:27:02.737 Using job config with 4 jobs 00:27:02.737 [2024-07-14 22:34:57.794076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.737 [2024-07-14 22:34:57.975944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.737 cpumask for '\''job0'\'' is too big 00:27:02.737 cpumask for '\''job1'\'' is too big 00:27:02.737 cpumask for '\''job2'\'' is too big 00:27:02.737 cpumask for '\''job3'\'' is too big 00:27:02.737 Running I/O for 2 seconds... 
00:27:02.737 00:27:02.737 Latency(us) 00:27:02.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32190.39 31.44 0.00 0.00 7946.58 1459.67 11915.64 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32162.86 31.41 0.00 0.00 7939.71 1400.09 11081.54 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32137.79 31.38 0.00 0.00 7933.64 1377.75 11141.12 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32212.60 31.46 0.00 0.00 7902.78 670.25 10902.81 00:27:02.737 =================================================================================================================== 00:27:02.737 Total : 128703.65 125.69 0.00 0.00 7930.65 670.25 11915.64' 00:27:02.737 22:35:01 -- bdevperf/common.sh@32 -- # echo '[2024-07-14 22:34:57.625165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:02.737 [2024-07-14 22:34:57.625368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136284 ] 00:27:02.737 Using job config with 4 jobs 00:27:02.737 [2024-07-14 22:34:57.794076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.737 [2024-07-14 22:34:57.975944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.737 cpumask for '\''job0'\'' is too big 00:27:02.737 cpumask for '\''job1'\'' is too big 00:27:02.737 cpumask for '\''job2'\'' is too big 00:27:02.737 cpumask for '\''job3'\'' is too big 00:27:02.737 Running I/O for 2 seconds... 00:27:02.737 00:27:02.737 Latency(us) 00:27:02.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32190.39 31.44 0.00 0.00 7946.58 1459.67 11915.64 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.01 32162.86 31.41 0.00 0.00 7939.71 1400.09 11081.54 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32137.79 31.38 0.00 0.00 7933.64 1377.75 11141.12 00:27:02.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:02.737 Malloc0 : 2.02 32212.60 31.46 0.00 0.00 7902.78 670.25 10902.81 00:27:02.737 =================================================================================================================== 00:27:02.737 Total : 128703.65 125.69 0.00 0.00 7930.65 670.25 11915.64' 00:27:02.737 22:35:01 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:02.737 22:35:01 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:02.737 22:35:01 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:02.737 22:35:01 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:02.737 [2024-07-14 22:35:01.631449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:02.737 [2024-07-14 22:35:01.631631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136358 ] 00:27:02.737 [2024-07-14 22:35:01.798018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.995 [2024-07-14 22:35:01.998239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.561 cpumask for 'job0' is too big 00:27:03.561 cpumask for 'job1' is too big 00:27:03.561 cpumask for 'job2' is too big 00:27:03.561 cpumask for 'job3' is too big 00:27:06.846 22:35:05 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:06.846 Running I/O for 2 seconds... 00:27:06.846 00:27:06.847 Latency(us) 00:27:06.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.847 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:06.847 Malloc0 : 2.01 31960.96 31.21 0.00 0.00 8003.65 1630.95 12630.57 00:27:06.847 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:06.847 Malloc0 : 2.02 31970.28 31.22 0.00 0.00 7985.72 1846.92 10604.92 00:27:06.847 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:06.847 Malloc0 : 2.02 31949.70 31.20 0.00 0.00 7977.77 1489.45 10426.18 00:27:06.847 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:06.847 Malloc0 : 2.02 31929.20 31.18 0.00 0.00 7969.73 1340.51 10307.03 00:27:06.847 =================================================================================================================== 00:27:06.847 Total : 127810.14 124.81 0.00 0.00 7984.20 1340.51 12630.57' 00:27:06.847 22:35:05 -- bdevperf/test_config.sh@27 -- # cleanup 00:27:06.847 22:35:05 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:06.847 22:35:05 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:06.847 22:35:05 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:06.847 22:35:05 -- bdevperf/common.sh@9 -- # local rw=write 00:27:06.847 22:35:05 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:06.847 22:35:05 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:06.847 22:35:05 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:06.847 00:27:06.847 22:35:05 -- bdevperf/common.sh@19 -- # echo 00:27:06.847 22:35:05 -- bdevperf/common.sh@20 -- # cat 00:27:06.847 22:35:05 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:27:06.847 22:35:05 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:06.847 22:35:05 -- bdevperf/common.sh@9 -- # local rw=write 00:27:06.847 22:35:05 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:06.847 00:27:06.847 22:35:05 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:06.847 22:35:05 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:06.847 22:35:05 -- bdevperf/common.sh@19 -- # echo 00:27:06.847 22:35:05 -- bdevperf/common.sh@20 -- # cat 00:27:06.847 22:35:05 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:06.847 22:35:05 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:06.847 22:35:05 -- bdevperf/common.sh@9 -- # local rw=write 00:27:06.847 22:35:05 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:06.847 22:35:05 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:06.847 00:27:06.847 22:35:05 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:27:06.847 22:35:05 -- bdevperf/common.sh@19 -- # echo 00:27:06.847 22:35:05 -- bdevperf/common.sh@20 -- # cat 00:27:06.847 22:35:05 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-14 22:35:05.647707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.102 [2024-07-14 22:35:05.647974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:27:11.102 Using job config with 3 jobs 00:27:11.102 [2024-07-14 22:35:05.814823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.102 [2024-07-14 22:35:05.988217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.102 cpumask for '\''job0'\'' is too big 00:27:11.102 cpumask for '\''job1'\'' is too big 00:27:11.102 cpumask for '\''job2'\'' is too big 00:27:11.102 Running I/O for 2 seconds... 00:27:11.102 00:27:11.102 Latency(us) 00:27:11.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39857.42 38.92 0.00 0.00 6416.59 1750.11 11021.96 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39831.01 38.90 0.00 0.00 6406.77 1705.43 9294.20 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39893.44 38.96 0.00 0.00 6383.83 673.98 7864.32 00:27:11.102 =================================================================================================================== 00:27:11.102 Total : 119581.87 116.78 0.00 0.00 6402.38 673.98 11021.96' 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-14 22:35:05.647707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.102 [2024-07-14 22:35:05.647974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:27:11.102 Using job config with 3 jobs 00:27:11.102 [2024-07-14 22:35:05.814823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.102 [2024-07-14 22:35:05.988217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.102 cpumask for '\''job0'\'' is too big 00:27:11.102 cpumask for '\''job1'\'' is too big 00:27:11.102 cpumask for '\''job2'\'' is too big 00:27:11.102 Running I/O for 2 seconds... 
00:27:11.102 00:27:11.102 Latency(us) 00:27:11.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39857.42 38.92 0.00 0.00 6416.59 1750.11 11021.96 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39831.01 38.90 0.00 0.00 6406.77 1705.43 9294.20 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39893.44 38.96 0.00 0.00 6383.83 673.98 7864.32 00:27:11.102 =================================================================================================================== 00:27:11.102 Total : 119581.87 116.78 0.00 0.00 6402.38 673.98 11021.96' 00:27:11.102 22:35:09 -- bdevperf/common.sh@32 -- # echo '[2024-07-14 22:35:05.647707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.102 [2024-07-14 22:35:05.647974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:27:11.102 Using job config with 3 jobs 00:27:11.102 [2024-07-14 22:35:05.814823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.102 [2024-07-14 22:35:05.988217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.102 cpumask for '\''job0'\'' is too big 00:27:11.102 cpumask for '\''job1'\'' is too big 00:27:11.102 cpumask for '\''job2'\'' is too big 00:27:11.102 Running I/O for 2 seconds... 00:27:11.102 00:27:11.102 Latency(us) 00:27:11.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39857.42 38.92 0.00 0.00 6416.59 1750.11 11021.96 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39831.01 38.90 0.00 0.00 6406.77 1705.43 9294.20 00:27:11.102 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:11.102 Malloc0 : 2.01 39893.44 38.96 0.00 0.00 6383.83 673.98 7864.32 00:27:11.102 =================================================================================================================== 00:27:11.102 Total : 119581.87 116.78 0.00 0.00 6402.38 673.98 11021.96' 00:27:11.102 22:35:09 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:11.102 22:35:09 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@35 -- # cleanup 00:27:11.102 22:35:09 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:11.102 22:35:09 -- bdevperf/common.sh@8 -- # local job_section=global 00:27:11.102 22:35:09 -- bdevperf/common.sh@9 -- # local rw=rw 00:27:11.102 22:35:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:11.102 22:35:09 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:11.102 22:35:09 -- bdevperf/common.sh@13 -- # cat 00:27:11.102 22:35:09 -- bdevperf/common.sh@18 -- # job='[global]' 00:27:11.102 22:35:09 -- bdevperf/common.sh@19 -- # echo 00:27:11.102 00:27:11.102 
22:35:09 -- bdevperf/common.sh@20 -- # cat 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:11.102 22:35:09 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:11.102 22:35:09 -- bdevperf/common.sh@9 -- # local rw= 00:27:11.102 22:35:09 -- bdevperf/common.sh@10 -- # local filename= 00:27:11.102 22:35:09 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:11.102 00:27:11.102 22:35:09 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:11.102 22:35:09 -- bdevperf/common.sh@19 -- # echo 00:27:11.102 22:35:09 -- bdevperf/common.sh@20 -- # cat 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:11.102 22:35:09 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:11.102 22:35:09 -- bdevperf/common.sh@9 -- # local rw= 00:27:11.102 22:35:09 -- bdevperf/common.sh@10 -- # local filename= 00:27:11.102 22:35:09 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:11.102 22:35:09 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:11.102 22:35:09 -- bdevperf/common.sh@19 -- # echo 00:27:11.102 00:27:11.102 22:35:09 -- bdevperf/common.sh@20 -- # cat 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:11.102 22:35:09 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:11.102 22:35:09 -- bdevperf/common.sh@9 -- # local rw= 00:27:11.102 22:35:09 -- bdevperf/common.sh@10 -- # local filename= 00:27:11.102 22:35:09 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:11.102 22:35:09 -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:11.102 00:27:11.102 22:35:09 -- bdevperf/common.sh@19 -- # echo 00:27:11.102 22:35:09 -- bdevperf/common.sh@20 -- # cat 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:11.102 22:35:09 -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:11.102 22:35:09 -- bdevperf/common.sh@9 -- # local rw= 00:27:11.102 22:35:09 -- bdevperf/common.sh@10 -- # local filename= 00:27:11.102 22:35:09 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:11.102 22:35:09 -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:11.102 00:27:11.102 22:35:09 -- bdevperf/common.sh@19 -- # echo 00:27:11.102 22:35:09 -- bdevperf/common.sh@20 -- # cat 00:27:11.102 22:35:09 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:15.304 22:35:13 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-14 22:35:09.633358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:15.304 [2024-07-14 22:35:09.633560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136486 ] 00:27:15.304 Using job config with 4 jobs 00:27:15.304 [2024-07-14 22:35:09.802920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.304 [2024-07-14 22:35:09.977960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.304 cpumask for '\''job0'\'' is too big 00:27:15.304 cpumask for '\''job1'\'' is too big 00:27:15.304 cpumask for '\''job2'\'' is too big 00:27:15.304 cpumask for '\''job3'\'' is too big 00:27:15.304 Running I/O for 2 seconds... 
00:27:15.304 00:27:15.304 Latency(us) 00:27:15.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.03 16134.34 15.76 0.00 0.00 15864.86 2904.44 23950.43 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.03 16123.53 15.75 0.00 0.00 15864.55 3410.85 23950.43 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.03 16113.11 15.74 0.00 0.00 15832.42 2785.28 21924.77 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.03 16102.53 15.73 0.00 0.00 15833.94 3336.38 21924.77 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.04 16092.29 15.72 0.00 0.00 15798.56 2815.07 21924.77 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.04 16081.53 15.70 0.00 0.00 15800.41 3351.27 21924.77 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.04 16071.30 15.69 0.00 0.00 15769.54 2844.86 21686.46 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.04 16060.78 15.68 0.00 0.00 15771.26 3336.38 21924.77 00:27:15.304 =================================================================================================================== 00:27:15.304 Total : 128779.41 125.76 0.00 0.00 15816.94 2785.28 23950.43' 00:27:15.304 22:35:13 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-14 22:35:09.633358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:15.304 [2024-07-14 22:35:09.633560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136486 ] 00:27:15.304 Using job config with 4 jobs 00:27:15.304 [2024-07-14 22:35:09.802920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.304 [2024-07-14 22:35:09.977960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.304 cpumask for '\''job0'\'' is too big 00:27:15.304 cpumask for '\''job1'\'' is too big 00:27:15.304 cpumask for '\''job2'\'' is too big 00:27:15.304 cpumask for '\''job3'\'' is too big 00:27:15.304 Running I/O for 2 seconds... 
00:27:15.304 00:27:15.304 Latency(us) 00:27:15.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.03 16134.34 15.76 0.00 0.00 15864.86 2904.44 23950.43 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.03 16123.53 15.75 0.00 0.00 15864.55 3410.85 23950.43 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.03 16113.11 15.74 0.00 0.00 15832.42 2785.28 21924.77 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.03 16102.53 15.73 0.00 0.00 15833.94 3336.38 21924.77 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.04 16092.29 15.72 0.00 0.00 15798.56 2815.07 21924.77 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.04 16081.53 15.70 0.00 0.00 15800.41 3351.27 21924.77 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.04 16071.30 15.69 0.00 0.00 15769.54 2844.86 21686.46 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc1 : 2.04 16060.78 15.68 0.00 0.00 15771.26 3336.38 21924.77 00:27:15.304 =================================================================================================================== 00:27:15.304 Total : 128779.41 125.76 0.00 0.00 15816.94 2785.28 23950.43' 00:27:15.304 22:35:13 -- bdevperf/common.sh@32 -- # echo '[2024-07-14 22:35:09.633358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:15.304 [2024-07-14 22:35:09.633560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136486 ] 00:27:15.304 Using job config with 4 jobs 00:27:15.304 [2024-07-14 22:35:09.802920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.304 [2024-07-14 22:35:09.977960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.304 cpumask for '\''job0'\'' is too big 00:27:15.304 cpumask for '\''job1'\'' is too big 00:27:15.304 cpumask for '\''job2'\'' is too big 00:27:15.304 cpumask for '\''job3'\'' is too big 00:27:15.304 Running I/O for 2 seconds... 
00:27:15.304 00:27:15.304 Latency(us) 00:27:15.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.304 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.304 Malloc0 : 2.03 16134.34 15.76 0.00 0.00 15864.86 2904.44 23950.43 00:27:15.304 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc1 : 2.03 16123.53 15.75 0.00 0.00 15864.55 3410.85 23950.43 00:27:15.305 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc0 : 2.03 16113.11 15.74 0.00 0.00 15832.42 2785.28 21924.77 00:27:15.305 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc1 : 2.03 16102.53 15.73 0.00 0.00 15833.94 3336.38 21924.77 00:27:15.305 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc0 : 2.04 16092.29 15.72 0.00 0.00 15798.56 2815.07 21924.77 00:27:15.305 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc1 : 2.04 16081.53 15.70 0.00 0.00 15800.41 3351.27 21924.77 00:27:15.305 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc0 : 2.04 16071.30 15.69 0.00 0.00 15769.54 2844.86 21686.46 00:27:15.305 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:15.305 Malloc1 : 2.04 16060.78 15.68 0.00 0.00 15771.26 3336.38 21924.77 00:27:15.305 =================================================================================================================== 00:27:15.305 Total : 128779.41 125.76 0.00 0.00 15816.94 2785.28 23950.43' 00:27:15.305 22:35:13 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:15.305 22:35:13 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:15.305 22:35:13 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:15.305 22:35:13 -- bdevperf/test_config.sh@44 -- # cleanup 00:27:15.305 22:35:13 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:15.305 22:35:13 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:15.305 00:27:15.305 real 0m16.077s 00:27:15.305 user 0m14.354s 00:27:15.305 sys 0m1.154s 00:27:15.305 22:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.305 22:35:13 -- common/autotest_common.sh@10 -- # set +x 00:27:15.305 ************************************ 00:27:15.305 END TEST bdevperf_config 00:27:15.305 ************************************ 00:27:15.305 22:35:13 -- spdk/autotest.sh@198 -- # uname -s 00:27:15.305 22:35:13 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:27:15.305 22:35:13 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:15.305 22:35:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:15.305 22:35:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:15.305 22:35:13 -- common/autotest_common.sh@10 -- # set +x 00:27:15.305 ************************************ 00:27:15.305 START TEST reactor_set_interrupt 00:27:15.305 ************************************ 00:27:15.305 22:35:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:15.305 * Looking for test storage... 
00:27:15.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.305 22:35:13 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:15.305 22:35:13 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:15.305 22:35:13 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:15.305 22:35:13 -- common/autotest_common.sh@34 -- # set -e 00:27:15.305 22:35:13 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:15.305 22:35:13 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:15.305 22:35:13 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:15.305 22:35:13 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:15.305 22:35:13 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:15.305 22:35:13 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:27:15.305 22:35:13 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:27:15.305 22:35:13 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:27:15.305 22:35:13 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:27:15.305 22:35:13 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:27:15.305 22:35:13 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:27:15.305 22:35:13 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:27:15.305 22:35:13 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:27:15.305 22:35:13 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:27:15.305 22:35:13 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:27:15.305 22:35:13 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:27:15.305 22:35:13 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:27:15.305 22:35:13 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:27:15.305 22:35:13 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:27:15.305 22:35:13 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:27:15.305 22:35:13 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:27:15.305 22:35:13 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:27:15.305 22:35:13 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:27:15.305 22:35:13 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:15.305 22:35:13 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:27:15.305 22:35:13 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:27:15.305 22:35:13 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:27:15.305 22:35:13 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:27:15.305 22:35:13 -- common/build_config.sh@28 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:15.305 22:35:13 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:27:15.305 22:35:13 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:27:15.305 22:35:13 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:27:15.305 22:35:13 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:27:15.305 22:35:13 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:27:15.305 22:35:13 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:27:15.305 22:35:13 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:27:15.305 22:35:13 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:15.305 22:35:13 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:27:15.305 22:35:13 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:27:15.305 22:35:13 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:27:15.305 22:35:13 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:27:15.305 22:35:13 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:27:15.305 22:35:13 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:27:15.305 22:35:13 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:15.305 22:35:13 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:27:15.305 22:35:13 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:27:15.305 22:35:13 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:27:15.305 22:35:13 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:27:15.305 22:35:13 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:27:15.305 22:35:13 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:15.305 22:35:13 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:27:15.305 22:35:13 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:27:15.305 22:35:13 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:27:15.305 22:35:13 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:27:15.305 22:35:13 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:27:15.305 22:35:13 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:27:15.305 22:35:13 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:27:15.305 22:35:13 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:15.305 22:35:13 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:15.305 22:35:13 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:27:15.305 22:35:13 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:27:15.305 22:35:13 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:27:15.305 22:35:13 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:27:15.305 22:35:13 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:27:15.305 22:35:13 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:27:15.305 22:35:13 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:27:15.305 22:35:13 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:15.305 22:35:13 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:27:15.305 22:35:13 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:27:15.306 22:35:13 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:27:15.306 22:35:13 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:27:15.306 22:35:13 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:27:15.306 22:35:13 -- common/build_config.sh@74 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:15.306 22:35:13 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:27:15.306 22:35:13 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:27:15.306 22:35:13 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:27:15.306 22:35:13 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:27:15.306 22:35:13 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:15.306 22:35:13 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:15.306 22:35:13 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:15.306 22:35:13 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:15.306 22:35:13 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:15.306 22:35:13 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:15.306 22:35:13 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:15.306 22:35:13 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:15.306 22:35:13 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:15.306 22:35:13 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:15.306 22:35:13 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:15.306 22:35:13 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:15.306 22:35:13 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:15.306 22:35:13 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:15.306 22:35:13 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:15.306 22:35:13 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:15.306 22:35:13 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:15.306 #define SPDK_CONFIG_H 00:27:15.306 #define SPDK_CONFIG_APPS 1 00:27:15.306 #define SPDK_CONFIG_ARCH native 00:27:15.306 #define SPDK_CONFIG_ASAN 1 00:27:15.306 #undef SPDK_CONFIG_AVAHI 00:27:15.306 #undef SPDK_CONFIG_CET 00:27:15.306 #define SPDK_CONFIG_COVERAGE 1 00:27:15.306 #define SPDK_CONFIG_CROSS_PREFIX 00:27:15.306 #undef SPDK_CONFIG_CRYPTO 00:27:15.306 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:15.306 #undef SPDK_CONFIG_CUSTOMOCF 00:27:15.306 #undef SPDK_CONFIG_DAOS 00:27:15.306 #define SPDK_CONFIG_DAOS_DIR 00:27:15.306 #define SPDK_CONFIG_DEBUG 1 00:27:15.306 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:15.306 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:15.306 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:15.306 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:15.306 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:15.306 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:15.306 #define SPDK_CONFIG_EXAMPLES 1 00:27:15.306 #undef SPDK_CONFIG_FC 00:27:15.306 #define SPDK_CONFIG_FC_PATH 00:27:15.306 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:15.306 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:15.306 #undef SPDK_CONFIG_FUSE 00:27:15.306 #undef SPDK_CONFIG_FUZZER 00:27:15.306 #define SPDK_CONFIG_FUZZER_LIB 00:27:15.306 #undef SPDK_CONFIG_GOLANG 00:27:15.306 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:15.306 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:15.306 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:15.306 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:15.306 #define 
SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:15.306 #define SPDK_CONFIG_IDXD 1 00:27:15.306 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:15.306 #undef SPDK_CONFIG_IPSEC_MB 00:27:15.306 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:15.306 #define SPDK_CONFIG_ISAL 1 00:27:15.306 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:15.306 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:15.306 #define SPDK_CONFIG_LIBDIR 00:27:15.306 #undef SPDK_CONFIG_LTO 00:27:15.306 #define SPDK_CONFIG_MAX_LCORES 00:27:15.306 #define SPDK_CONFIG_NVME_CUSE 1 00:27:15.306 #undef SPDK_CONFIG_OCF 00:27:15.306 #define SPDK_CONFIG_OCF_PATH 00:27:15.306 #define SPDK_CONFIG_OPENSSL_PATH 00:27:15.306 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:15.306 #undef SPDK_CONFIG_PGO_USE 00:27:15.306 #define SPDK_CONFIG_PREFIX /usr/local 00:27:15.306 #define SPDK_CONFIG_RAID5F 1 00:27:15.306 #undef SPDK_CONFIG_RBD 00:27:15.306 #define SPDK_CONFIG_RDMA 1 00:27:15.306 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:15.306 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:15.306 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:15.306 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:15.306 #undef SPDK_CONFIG_SHARED 00:27:15.306 #undef SPDK_CONFIG_SMA 00:27:15.306 #define SPDK_CONFIG_TESTS 1 00:27:15.306 #undef SPDK_CONFIG_TSAN 00:27:15.306 #undef SPDK_CONFIG_UBLK 00:27:15.306 #define SPDK_CONFIG_UBSAN 1 00:27:15.306 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:15.306 #undef SPDK_CONFIG_URING 00:27:15.306 #define SPDK_CONFIG_URING_PATH 00:27:15.306 #undef SPDK_CONFIG_URING_ZNS 00:27:15.306 #undef SPDK_CONFIG_USDT 00:27:15.306 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:15.306 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:15.306 #undef SPDK_CONFIG_VFIO_USER 00:27:15.306 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:15.306 #define SPDK_CONFIG_VHOST 1 00:27:15.306 #define SPDK_CONFIG_VIRTIO 1 00:27:15.306 #undef SPDK_CONFIG_VTUNE 00:27:15.306 #define SPDK_CONFIG_VTUNE_DIR 00:27:15.306 #define SPDK_CONFIG_WERROR 1 00:27:15.306 #define SPDK_CONFIG_WPDK_DIR 00:27:15.306 #undef SPDK_CONFIG_XNVME 00:27:15.306 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:15.306 22:35:13 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:15.306 22:35:13 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:15.306 22:35:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.306 22:35:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.306 22:35:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.306 22:35:13 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:15.306 22:35:13 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:15.306 22:35:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:15.306 22:35:13 -- paths/export.sh@5 -- # export PATH 00:27:15.306 22:35:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:15.306 22:35:13 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:15.306 22:35:13 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:15.306 22:35:13 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:15.306 22:35:13 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:15.306 22:35:13 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:15.306 22:35:13 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:15.307 22:35:13 -- pm/common@16 -- # TEST_TAG=N/A 00:27:15.307 22:35:13 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:15.307 22:35:13 -- common/autotest_common.sh@52 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:15.307 22:35:13 -- common/autotest_common.sh@56 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:15.307 22:35:13 -- common/autotest_common.sh@58 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:15.307 22:35:13 -- common/autotest_common.sh@60 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:15.307 22:35:13 -- common/autotest_common.sh@62 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:15.307 22:35:13 -- common/autotest_common.sh@64 -- # : 00:27:15.307 22:35:13 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:15.307 22:35:13 -- common/autotest_common.sh@66 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:27:15.307 22:35:13 -- common/autotest_common.sh@68 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:15.307 22:35:13 -- common/autotest_common.sh@70 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:15.307 22:35:13 -- common/autotest_common.sh@72 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:15.307 22:35:13 -- common/autotest_common.sh@74 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:15.307 22:35:13 -- common/autotest_common.sh@76 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:15.307 22:35:13 -- common/autotest_common.sh@78 -- # : 0 00:27:15.307 22:35:13 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:15.307 22:35:13 -- common/autotest_common.sh@80 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:15.307 22:35:13 -- common/autotest_common.sh@82 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:15.307 22:35:13 -- common/autotest_common.sh@84 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:15.307 22:35:13 -- common/autotest_common.sh@86 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:15.307 22:35:13 -- common/autotest_common.sh@88 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:15.307 22:35:13 -- common/autotest_common.sh@90 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:15.307 22:35:13 -- common/autotest_common.sh@92 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:15.307 22:35:13 -- common/autotest_common.sh@94 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:15.307 22:35:13 -- common/autotest_common.sh@96 -- # : rdma 00:27:15.307 22:35:13 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:15.307 22:35:13 -- common/autotest_common.sh@98 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:15.307 22:35:13 -- common/autotest_common.sh@100 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:15.307 22:35:13 -- common/autotest_common.sh@102 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:15.307 22:35:13 -- common/autotest_common.sh@104 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:15.307 22:35:13 -- common/autotest_common.sh@106 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:15.307 22:35:13 -- common/autotest_common.sh@108 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:15.307 22:35:13 -- common/autotest_common.sh@110 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:15.307 22:35:13 -- common/autotest_common.sh@112 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:15.307 22:35:13 -- common/autotest_common.sh@114 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:15.307 22:35:13 -- common/autotest_common.sh@116 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:15.307 22:35:13 -- common/autotest_common.sh@118 -- # : 00:27:15.307 22:35:13 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:15.307 22:35:13 -- common/autotest_common.sh@120 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:15.307 22:35:13 -- common/autotest_common.sh@122 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:15.307 22:35:13 -- common/autotest_common.sh@124 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:15.307 22:35:13 -- common/autotest_common.sh@126 -- # : 0 00:27:15.307 
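The long run of paired `: 0`/`: 1` and `export` steps above is presumably bash's default-assignment idiom: `: ${VAR:=val}` assigns only when the variable is unset, and xtrace prints the already-expanded form, which is why each pair appears as a bare `: 1` followed by `export VAR`. A sketch of the pattern using three flags and values taken from the trace:

# Seed autotest feature flags with defaults, without clobbering values
# the caller already exported.
: "${SPDK_TEST_UNITTEST:=1}"   # traced as ": 1" because xtrace only sees the expansion
export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NVME:=1}"
export SPDK_TEST_NVME
: "${SPDK_TEST_FUZZER:=0}"
export SPDK_TEST_FUZZER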
22:35:13 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:15.307 22:35:13 -- common/autotest_common.sh@128 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:15.307 22:35:13 -- common/autotest_common.sh@130 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:15.307 22:35:13 -- common/autotest_common.sh@132 -- # : 00:27:15.307 22:35:13 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:27:15.307 22:35:13 -- common/autotest_common.sh@134 -- # : true 00:27:15.307 22:35:13 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:15.307 22:35:13 -- common/autotest_common.sh@136 -- # : 1 00:27:15.307 22:35:13 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:15.307 22:35:13 -- common/autotest_common.sh@138 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:15.307 22:35:13 -- common/autotest_common.sh@140 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:15.307 22:35:13 -- common/autotest_common.sh@142 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:15.307 22:35:13 -- common/autotest_common.sh@144 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:15.307 22:35:13 -- common/autotest_common.sh@146 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:15.307 22:35:13 -- common/autotest_common.sh@148 -- # : 00:27:15.307 22:35:13 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:15.307 22:35:13 -- common/autotest_common.sh@150 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:15.307 22:35:13 -- common/autotest_common.sh@152 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:15.307 22:35:13 -- common/autotest_common.sh@154 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:15.307 22:35:13 -- common/autotest_common.sh@156 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:15.307 22:35:13 -- common/autotest_common.sh@158 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:15.307 22:35:13 -- common/autotest_common.sh@160 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:15.307 22:35:13 -- common/autotest_common.sh@163 -- # : 00:27:15.307 22:35:13 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:15.307 22:35:13 -- common/autotest_common.sh@165 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:15.307 22:35:13 -- common/autotest_common.sh@167 -- # : 0 00:27:15.307 22:35:13 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:15.307 22:35:13 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:15.307 22:35:13 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:15.307 22:35:13 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:15.307 22:35:13 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:15.307 22:35:13 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:15.307 22:35:13 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:15.307 22:35:13 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:15.308 22:35:13 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:15.308 22:35:13 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:15.308 22:35:13 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:15.308 22:35:13 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:15.308 22:35:13 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:15.308 22:35:13 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:15.308 22:35:13 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:15.308 22:35:13 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:15.308 22:35:13 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:15.308 22:35:13 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:15.308 22:35:13 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:15.308 22:35:13 -- common/autotest_common.sh@196 -- # cat 00:27:15.308 22:35:13 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:15.308 22:35:13 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:15.308 22:35:13 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:15.308 22:35:13 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:15.308 
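The sanitizer runtime options above are plain environment strings; restated verbatim from the exports in the trace (only the comments are ours):

# Address/UB sanitizer behaviour for every binary the test run spawns.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
# Leak-sanitizer suppressions; the harness seeds the file with "leak:libfuse3.so".
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file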
22:35:13 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:15.308 22:35:13 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:15.308 22:35:13 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:15.308 22:35:13 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:15.308 22:35:13 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:15.308 22:35:13 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:15.308 22:35:13 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:15.308 22:35:13 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:27:15.308 22:35:13 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:27:15.308 22:35:13 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:15.308 22:35:13 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:15.308 22:35:13 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:15.308 22:35:13 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:15.308 22:35:13 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:15.308 22:35:13 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:15.308 22:35:13 -- common/autotest_common.sh@249 -- # valgrind= 00:27:15.308 22:35:13 -- common/autotest_common.sh@255 -- # uname -s 00:27:15.308 22:35:13 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:15.308 22:35:13 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:15.308 22:35:13 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:15.308 22:35:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:15.308 22:35:13 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:15.308 22:35:13 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:15.308 22:35:13 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:15.308 22:35:13 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:15.308 22:35:13 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:15.308 22:35:13 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:15.308 22:35:13 -- common/autotest_common.sh@309 -- # [[ -z 136573 ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@309 -- # kill -0 136573 00:27:15.308 22:35:13 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:15.308 22:35:13 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:15.308 22:35:13 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:15.308 22:35:13 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:15.308 22:35:13 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:27:15.308 22:35:13 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:15.308 22:35:13 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:15.308 22:35:13 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.rqt57z 00:27:15.308 22:35:13 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:15.308 22:35:13 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:15.308 22:35:13 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.rqt57z/tests/interrupt /tmp/spdk.rqt57z 00:27:15.308 22:35:13 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@318 -- # df -T 00:27:15.308 22:35:13 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224465920 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224465920 00:27:15.308 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:27:15.308 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616217600 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:27:15.308 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983799296 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269972480 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:27:15.308 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:27:15.308 22:35:13 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272565248 00:27:15.308 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:27:15.308 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:15.308 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.308 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:15.309 22:35:13 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=97899765760 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=1803014144 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:27:15.309 22:35:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:15.309 22:35:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:15.309 22:35:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:15.309 22:35:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:15.309 22:35:13 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:15.309 * Looking for test storage... 00:27:15.309 22:35:13 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:15.309 22:35:13 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:15.309 22:35:13 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.309 22:35:13 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:15.309 22:35:13 -- common/autotest_common.sh@363 -- # mount=/ 00:27:15.309 22:35:13 -- common/autotest_common.sh@365 -- # target_space=10616217600 00:27:15.309 22:35:13 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:15.309 22:35:13 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:15.309 22:35:13 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:27:15.309 22:35:13 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:27:15.309 22:35:13 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:27:15.309 22:35:13 -- common/autotest_common.sh@372 -- # new_size=12198391808 00:27:15.309 22:35:13 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:15.309 22:35:13 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.309 22:35:13 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.309 22:35:13 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:15.309 22:35:13 -- common/autotest_common.sh@380 -- # return 0 00:27:15.309 22:35:13 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:27:15.309 22:35:13 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:15.309 22:35:13 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:15.309 22:35:13 -- 
common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:15.309 22:35:13 -- common/autotest_common.sh@1672 -- # true 00:27:15.309 22:35:13 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:15.309 22:35:13 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:15.309 22:35:13 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:15.309 22:35:13 -- common/autotest_common.sh@27 -- # exec 00:27:15.309 22:35:13 -- common/autotest_common.sh@29 -- # exec 00:27:15.309 22:35:13 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:15.309 22:35:13 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:27:15.309 22:35:13 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:15.309 22:35:13 -- common/autotest_common.sh@18 -- # set -x 00:27:15.309 22:35:13 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:15.309 22:35:13 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:15.309 22:35:13 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:15.309 22:35:13 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:15.309 22:35:13 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:15.310 22:35:13 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:15.310 22:35:13 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:15.310 22:35:13 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136613 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136613 /var/tmp/spdk.sock 00:27:15.310 22:35:13 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:15.310 22:35:13 -- common/autotest_common.sh@819 -- # '[' -z 136613 ']' 00:27:15.310 22:35:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.310 22:35:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:15.310 22:35:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.310 22:35:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:15.310 22:35:13 -- common/autotest_common.sh@10 -- # set +x 00:27:15.310 [2024-07-14 22:35:13.830574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
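At this point the harness has launched `interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g` and parks in `waitforlisten` until the RPC socket answers. One plausible reduction of that wait loop, assuming rpc.py's standard `-s`/`-t` options and the `rpc_get_methods` call; the retry count and sleep interval are our choices, not the harness's:

# Block until the freshly started SPDK app listens on its RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.1
    done
    return 1
}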
00:27:15.310 [2024-07-14 22:35:13.830782] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136613 ] 00:27:15.310 [2024-07-14 22:35:14.007480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:15.310 [2024-07-14 22:35:14.171619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.310 [2024-07-14 22:35:14.171746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.310 [2024-07-14 22:35:14.172048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.568 [2024-07-14 22:35:14.424725] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:15.826 22:35:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:15.826 22:35:14 -- common/autotest_common.sh@852 -- # return 0 00:27:15.826 22:35:14 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:27:15.826 22:35:14 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.088 Malloc0 00:27:16.088 Malloc1 00:27:16.088 Malloc2 00:27:16.088 22:35:15 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:27:16.088 22:35:15 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:16.088 22:35:15 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:16.088 22:35:15 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:16.088 5000+0 records in 00:27:16.088 5000+0 records out 00:27:16.088 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0248029 s, 413 MB/s 00:27:16.088 22:35:15 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:16.347 AIO0 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 136613 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 136613 without_thd 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136613 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:16.347 22:35:15 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:16.347 22:35:15 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:16.606 22:35:15 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:16.606 22:35:15 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:16.606 22:35:15 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:16.864 22:35:15 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:16.864 22:35:15 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:16.864 22:35:15 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:16.864 spdk_thread ids are 1 on reactor0. 00:27:16.864 22:35:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:16.864 22:35:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136613 0 00:27:16.864 22:35:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136613 0 idle 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136613 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.63 reactor_0' 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@48 -- # echo 136613 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.63 reactor_0 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:16.865 22:35:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:17.124 22:35:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:17.124 22:35:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136613 1 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136613 1 idle 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:17.124 
22:35:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:17.124 22:35:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136616 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.00 reactor_1' 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@48 -- # echo 136616 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.00 reactor_1 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:17.124 22:35:16 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:17.124 22:35:16 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136613 2 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136613 2 idle 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:17.124 22:35:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136617 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.00 reactor_2' 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@48 -- # echo 136617 root 20 0 20.1t 143076 28824 S 0.0 1.2 0:00.00 reactor_2 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:17.383 22:35:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:17.383 22:35:16 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:27:17.383 22:35:16 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
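
The reactor_is_busy_or_idle probe traced above is the workhorse of this test. A sketch reconstructed from the trace, with the 70%/30% thresholds read off the [[ ]] comparisons in the log (the helper name and the ${cpu_rate%.*} truncation are illustrative, not lifted from the script):

    # Sample one batch iteration of top in thread view and classify reactor_<idx>
    # of <pid> as busy (>=70% CPU) or idle (<=30% CPU).
    reactor_state_sketch() {
        local pid=$1 idx=$2 state=$3 top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        # Column 9 of top's per-thread listing is %CPU; strip leading blanks first.
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                  # "99.9" -> 99, "0.0" -> 0
        if [[ $state = busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1   # busy reactors must burn CPU
        else
            [[ $cpu_rate -gt 30 ]] && return 1   # idle reactors must not
        fi
        return 0
    }
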
00:27:17.383 22:35:16 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:27:17.642 [2024-07-14 22:35:16.525196] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:17.642 22:35:16 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:17.901 [2024-07-14 22:35:16.753157] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:17.901 [2024-07-14 22:35:16.753882] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:17.901 22:35:16 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:17.901 [2024-07-14 22:35:16.961042] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:17.901 [2024-07-14 22:35:16.961743] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:18.159 22:35:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:18.159 22:35:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136613 0 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136613 0 busy 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:18.159 22:35:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136613 root 20 0 20.1t 143188 28824 R 99.9 1.2 0:01.03 reactor_0' 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@48 -- # echo 136613 root 20 0 20.1t 143188 28824 R 99.9 1.2 0:01.03 reactor_0 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:18.159 22:35:17 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:18.159 22:35:17 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136613 2 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136613 2 busy 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:18.159 
22:35:17 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:18.159 22:35:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136617 root 20 0 20.1t 143188 28824 R 99.9 1.2 0:00.34 reactor_2' 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@48 -- # echo 136617 root 20 0 20.1t 143188 28824 R 99.9 1.2 0:00.34 reactor_2 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:18.417 22:35:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:18.417 22:35:17 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:18.676 [2024-07-14 22:35:17.553057] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:27:18.676 [2024-07-14 22:35:17.553704] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:18.676 22:35:17 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:27:18.676 22:35:17 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136613 2 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136613 2 idle 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136617 root 20 0 20.1t 143252 28824 S 0.0 1.2 0:00.59 reactor_2' 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@48 -- # echo 136617 root 20 0 20.1t 143252 28824 S 0.0 1.2 0:00.59 reactor_2 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:18.676 22:35:17 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:18.676 22:35:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:18.676 22:35:17 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:18.935 [2024-07-14 22:35:17.992940] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:27:18.935 [2024-07-14 22:35:17.993422] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:19.193 22:35:18 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:27:19.193 22:35:18 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:27:19.193 22:35:18 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:27:19.193 [2024-07-14 22:35:18.229330] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:19.193 22:35:18 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136613 0 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136613 0 idle 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@33 -- # local pid=136613 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136613 -w 256 00:27:19.193 22:35:18 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136613 root 20 0 20.1t 143344 28824 S 0.0 1.2 0:01.89 reactor_0' 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@48 -- # echo 136613 root 20 0 20.1t 143344 28824 S 0.0 1.2 0:01.89 reactor_0 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:19.451 22:35:18 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:19.451 22:35:18 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:19.451 22:35:18 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:27:19.451 22:35:18 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:27:19.451 22:35:18 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 136613 
00:27:19.451 22:35:18 -- common/autotest_common.sh@926 -- # '[' -z 136613 ']' 00:27:19.451 22:35:18 -- common/autotest_common.sh@930 -- # kill -0 136613 00:27:19.451 22:35:18 -- common/autotest_common.sh@931 -- # uname 00:27:19.451 22:35:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:19.451 22:35:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136613 00:27:19.451 22:35:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:19.452 22:35:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:19.452 22:35:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136613' 00:27:19.452 killing process with pid 136613 00:27:19.452 22:35:18 -- common/autotest_common.sh@945 -- # kill 136613 00:27:19.452 22:35:18 -- common/autotest_common.sh@950 -- # wait 136613 00:27:20.827 22:35:19 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:20.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.827 22:35:19 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136787 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:20.827 22:35:19 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136787 /var/tmp/spdk.sock 00:27:20.827 22:35:19 -- common/autotest_common.sh@819 -- # '[' -z 136787 ']' 00:27:20.827 22:35:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.827 22:35:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:20.827 22:35:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.827 22:35:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:20.827 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:27:20.827 [2024-07-14 22:35:19.635354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:20.827 [2024-07-14 22:35:19.635710] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136787 ] 00:27:20.827 [2024-07-14 22:35:19.814058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:21.086 [2024-07-14 22:35:20.019636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.086 [2024-07-14 22:35:20.019777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.086 [2024-07-14 22:35:20.019790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.345 [2024-07-14 22:35:20.308598] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
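
Before this second run starts, the first target (pid 136613) was reaped through killprocess; the guard-then-kill-then-wait sequence visible in that trace condenses to roughly the following (a sketch; the real helper in autotest_common.sh also covers the non-Linux path hinted at by the uname check):

    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        [[ $(uname) = Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name = sudo ]] && return 1    # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap our own child
    }
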
00:27:21.602 22:35:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:21.602 22:35:20 -- common/autotest_common.sh@852 -- # return 0 00:27:21.602 22:35:20 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:27:21.602 22:35:20 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.861 Malloc0 00:27:21.861 Malloc1 00:27:21.861 Malloc2 00:27:21.861 22:35:20 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:27:21.861 22:35:20 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:21.861 22:35:20 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:21.861 22:35:20 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:21.861 5000+0 records in 00:27:21.861 5000+0 records out 00:27:21.861 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0256343 s, 399 MB/s 00:27:21.861 22:35:20 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:22.458 AIO0 00:27:22.458 22:35:21 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 136787 00:27:22.458 22:35:21 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 136787 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136787 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:22.459 22:35:21 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:22.459 22:35:21 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:22.725 spdk_thread ids are 1 on reactor0. 
00:27:22.725 22:35:21 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:22.725 22:35:21 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:22.725 22:35:21 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:22.725 22:35:21 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136787 0 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136787 0 idle 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:22.725 22:35:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136787 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.75 reactor_0' 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@48 -- # echo 136787 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.75 reactor_0 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:22.983 22:35:21 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:22.983 22:35:21 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136787 1 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136787 1 idle 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:22.983 22:35:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136791 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.00 reactor_1' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # echo 136791 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.00 reactor_1 00:27:23.241 22:35:22 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:23.241 22:35:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:23.241 22:35:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136787 2 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136787 2 idle 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136792 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.00 reactor_2' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # echo 136792 root 20 0 20.1t 145636 28664 S 0.0 1.2 0:00.00 reactor_2 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:23.241 22:35:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:23.241 22:35:22 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:27:23.241 22:35:22 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:23.499 [2024-07-14 22:35:22.520743] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:23.499 [2024-07-14 22:35:22.520911] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
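
This is the actual mode flip under test: reactor_set_interrupt_mode is a test-local RPC (hence --plugin interrupt_plugin, which rpc.py finds through the PYTHONPATH exported near the top of the run), and -d takes the reactor out of interrupt mode while the bare call puts it back. The exercise reduces to this toggle (a sketch):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop reactors 0 and 2 into poll mode and expect them to show up busy...
    for i in 0 2; do
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i" -d
    done
    # ...then restore interrupt mode and expect them to go idle again.
    for i in 2 0; do
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i"
    done
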
00:27:23.499 [2024-07-14 22:35:22.521833] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:23.499 22:35:22 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:23.757 [2024-07-14 22:35:22.712680] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:23.757 [2024-07-14 22:35:22.713372] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:23.757 22:35:22 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:23.757 22:35:22 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136787 0 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136787 0 busy 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:23.757 22:35:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136787 root 20 0 20.1t 145716 28664 R 99.9 1.2 0:01.12 reactor_0' 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@48 -- # echo 136787 root 20 0 20.1t 145716 28664 R 99.9 1.2 0:01.12 reactor_0 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:24.015 22:35:22 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:24.015 22:35:22 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136787 2 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136787 2 busy 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:24.015 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:24.016 22:35:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:24.016 22:35:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:24.016 22:35:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
136792 root 20 0 20.1t 145716 28664 R 93.3 1.2 0:00.33 reactor_2' 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@48 -- # echo 136792 root 20 0 20.1t 145716 28664 R 93.3 1.2 0:00.33 reactor_2 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:24.016 22:35:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:24.016 22:35:23 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:24.582 [2024-07-14 22:35:23.368967] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:27:24.582 [2024-07-14 22:35:23.369414] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:24.582 22:35:23 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:27:24.582 22:35:23 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136787 2 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136787 2 idle 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136792 root 20 0 20.1t 145788 28664 S 0.0 1.2 0:00.65 reactor_2' 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@48 -- # echo 136792 root 20 0 20.1t 145788 28664 S 0.0 1.2 0:00.65 reactor_2 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:24.582 22:35:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:24.582 22:35:23 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:24.840 [2024-07-14 22:35:23.793021] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:27:24.840 [2024-07-14 22:35:23.794139] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:27:24.840 [2024-07-14 22:35:23.794308] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:24.840 22:35:23 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:27:24.840 22:35:23 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136787 0 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136787 0 idle 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@33 -- # local pid=136787 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136787 -w 256 00:27:24.840 22:35:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136787 root 20 0 20.1t 145828 28664 S 6.7 1.2 0:02.04 reactor_0' 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@48 -- # echo 136787 root 20 0 20.1t 145828 28664 S 6.7 1.2 0:02.04 reactor_0 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:27:25.098 22:35:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:25.098 22:35:23 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:25.098 22:35:23 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:27:25.098 22:35:23 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:25.098 22:35:23 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 136787 00:27:25.098 22:35:23 -- common/autotest_common.sh@926 -- # '[' -z 136787 ']' 00:27:25.098 22:35:23 -- common/autotest_common.sh@930 -- # kill -0 136787 00:27:25.098 22:35:23 -- common/autotest_common.sh@931 -- # uname 00:27:25.098 22:35:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:25.098 22:35:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136787 00:27:25.098 22:35:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:25.098 22:35:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:25.098 22:35:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136787' 00:27:25.098 killing process with pid 136787 00:27:25.098 22:35:23 -- common/autotest_common.sh@945 -- # kill 136787 00:27:25.098 22:35:23 -- common/autotest_common.sh@950 -- # wait 136787 00:27:26.477 22:35:25 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:26.477 ************************************ 00:27:26.477 END TEST reactor_set_interrupt 00:27:26.477 ************************************ 00:27:26.477 00:27:26.477 real 0m11.620s 00:27:26.477 user 0m12.242s 00:27:26.477 sys 0m1.488s 00:27:26.477 22:35:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.477 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:27:26.477 22:35:25 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:26.477 22:35:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:26.477 22:35:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.477 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:27:26.477 ************************************ 00:27:26.477 START TEST reap_unregistered_poller 00:27:26.477 ************************************ 00:27:26.477 22:35:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:26.477 * Looking for test storage... 00:27:26.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.477 22:35:25 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
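
The reap_unregistered_poller.sh preamble traced below is the self-locating boilerplate the interrupt tests share (here running inside the sourced interrupt_common.sh); stripped of xtrace noise it is just:

    testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")     # the spdk repo root
    source "$rootdir/test/common/autotest_common.sh"
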
00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:26.477 22:35:25 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:26.477 22:35:25 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:26.477 22:35:25 -- common/autotest_common.sh@34 -- # set -e 00:27:26.477 22:35:25 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:26.477 22:35:25 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:26.477 22:35:25 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:26.477 22:35:25 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:26.477 22:35:25 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:26.477 22:35:25 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:27:26.478 22:35:25 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:27:26.478 22:35:25 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:27:26.478 22:35:25 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:27:26.478 22:35:25 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:27:26.478 22:35:25 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:27:26.478 22:35:25 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:27:26.478 22:35:25 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:27:26.478 22:35:25 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:27:26.478 22:35:25 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:27:26.478 22:35:25 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:27:26.478 22:35:25 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:27:26.478 22:35:25 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:27:26.478 22:35:25 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:27:26.478 22:35:25 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:27:26.478 22:35:25 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:27:26.478 22:35:25 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:27:26.478 22:35:25 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:27:26.478 22:35:25 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:26.478 22:35:25 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:27:26.478 22:35:25 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:27:26.478 22:35:25 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:27:26.478 22:35:25 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:27:26.478 22:35:25 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:26.478 22:35:25 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:27:26.478 22:35:25 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:27:26.478 22:35:25 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:27:26.478 22:35:25 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:27:26.478 22:35:25 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:27:26.478 22:35:25 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:27:26.478 22:35:25 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:27:26.478 22:35:25 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:26.478 22:35:25 -- 
common/build_config.sh@38 -- # CONFIG_ASAN=y 00:27:26.478 22:35:25 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:27:26.478 22:35:25 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:27:26.478 22:35:25 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:27:26.478 22:35:25 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:27:26.478 22:35:25 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:26.478 22:35:25 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:27:26.478 22:35:25 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:27:26.478 22:35:25 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:27:26.478 22:35:25 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:27:26.478 22:35:25 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:27:26.478 22:35:25 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:26.478 22:35:25 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:27:26.478 22:35:25 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:27:26.478 22:35:25 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:27:26.478 22:35:25 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:27:26.478 22:35:25 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:27:26.478 22:35:25 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:27:26.478 22:35:25 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:27:26.478 22:35:25 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:26.478 22:35:25 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:26.478 22:35:25 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:27:26.478 22:35:25 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:27:26.478 22:35:25 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:27:26.478 22:35:25 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:27:26.478 22:35:25 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:27:26.478 22:35:25 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:27:26.478 22:35:25 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:27:26.478 22:35:25 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:26.478 22:35:25 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:27:26.478 22:35:25 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:27:26.478 22:35:25 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:27:26.478 22:35:25 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:27:26.478 22:35:25 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:27:26.478 22:35:25 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:26.478 22:35:25 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:27:26.478 22:35:25 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:27:26.478 22:35:25 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:27:26.478 22:35:25 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:27:26.478 22:35:25 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:26.478 22:35:25 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:26.478 22:35:25 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:26.478 22:35:25 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:26.478 
22:35:25 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:26.478 22:35:25 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:26.478 22:35:25 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:26.478 22:35:25 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:26.478 22:35:25 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:26.478 22:35:25 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:26.478 22:35:25 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:26.478 22:35:25 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:26.478 22:35:25 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:26.478 22:35:25 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:26.478 22:35:25 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:26.478 22:35:25 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:26.478 22:35:25 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:26.478 #define SPDK_CONFIG_H 00:27:26.478 #define SPDK_CONFIG_APPS 1 00:27:26.478 #define SPDK_CONFIG_ARCH native 00:27:26.478 #define SPDK_CONFIG_ASAN 1 00:27:26.478 #undef SPDK_CONFIG_AVAHI 00:27:26.478 #undef SPDK_CONFIG_CET 00:27:26.478 #define SPDK_CONFIG_COVERAGE 1 00:27:26.478 #define SPDK_CONFIG_CROSS_PREFIX 00:27:26.478 #undef SPDK_CONFIG_CRYPTO 00:27:26.478 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:26.478 #undef SPDK_CONFIG_CUSTOMOCF 00:27:26.478 #undef SPDK_CONFIG_DAOS 00:27:26.478 #define SPDK_CONFIG_DAOS_DIR 00:27:26.478 #define SPDK_CONFIG_DEBUG 1 00:27:26.478 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:26.478 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:26.478 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:26.478 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:26.478 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:26.478 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:26.478 #define SPDK_CONFIG_EXAMPLES 1 00:27:26.478 #undef SPDK_CONFIG_FC 00:27:26.478 #define SPDK_CONFIG_FC_PATH 00:27:26.478 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:26.478 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:26.478 #undef SPDK_CONFIG_FUSE 00:27:26.478 #undef SPDK_CONFIG_FUZZER 00:27:26.478 #define SPDK_CONFIG_FUZZER_LIB 00:27:26.478 #undef SPDK_CONFIG_GOLANG 00:27:26.478 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:26.478 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:26.478 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:26.478 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:26.478 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:26.478 #define SPDK_CONFIG_IDXD 1 00:27:26.478 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:26.478 #undef SPDK_CONFIG_IPSEC_MB 00:27:26.478 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:26.478 #define SPDK_CONFIG_ISAL 1 00:27:26.478 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:26.478 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:26.478 #define SPDK_CONFIG_LIBDIR 00:27:26.478 #undef SPDK_CONFIG_LTO 00:27:26.478 #define SPDK_CONFIG_MAX_LCORES 00:27:26.478 #define SPDK_CONFIG_NVME_CUSE 1 00:27:26.478 #undef SPDK_CONFIG_OCF 00:27:26.478 #define SPDK_CONFIG_OCF_PATH 00:27:26.478 #define SPDK_CONFIG_OPENSSL_PATH 00:27:26.478 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:26.478 #undef SPDK_CONFIG_PGO_USE 00:27:26.478 #define SPDK_CONFIG_PREFIX /usr/local 
00:27:26.478 #define SPDK_CONFIG_RAID5F 1 00:27:26.478 #undef SPDK_CONFIG_RBD 00:27:26.478 #define SPDK_CONFIG_RDMA 1 00:27:26.478 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:26.478 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:26.478 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:26.478 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:26.478 #undef SPDK_CONFIG_SHARED 00:27:26.478 #undef SPDK_CONFIG_SMA 00:27:26.478 #define SPDK_CONFIG_TESTS 1 00:27:26.478 #undef SPDK_CONFIG_TSAN 00:27:26.478 #undef SPDK_CONFIG_UBLK 00:27:26.478 #define SPDK_CONFIG_UBSAN 1 00:27:26.478 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:26.478 #undef SPDK_CONFIG_URING 00:27:26.478 #define SPDK_CONFIG_URING_PATH 00:27:26.478 #undef SPDK_CONFIG_URING_ZNS 00:27:26.478 #undef SPDK_CONFIG_USDT 00:27:26.478 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:26.478 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:26.478 #undef SPDK_CONFIG_VFIO_USER 00:27:26.478 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:26.478 #define SPDK_CONFIG_VHOST 1 00:27:26.478 #define SPDK_CONFIG_VIRTIO 1 00:27:26.478 #undef SPDK_CONFIG_VTUNE 00:27:26.478 #define SPDK_CONFIG_VTUNE_DIR 00:27:26.478 #define SPDK_CONFIG_WERROR 1 00:27:26.478 #define SPDK_CONFIG_WPDK_DIR 00:27:26.478 #undef SPDK_CONFIG_XNVME 00:27:26.478 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:26.478 22:35:25 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:26.478 22:35:25 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:26.478 22:35:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.478 22:35:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.478 22:35:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.479 22:35:25 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:26.479 22:35:25 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:26.479 22:35:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:26.479 22:35:25 -- paths/export.sh@5 -- # export PATH 00:27:26.479 22:35:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:26.479 22:35:25 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:26.479 22:35:25 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:26.479 22:35:25 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:26.479 22:35:25 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:26.479 22:35:25 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:26.479 22:35:25 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:26.479 22:35:25 -- pm/common@16 -- # TEST_TAG=N/A 00:27:26.479 22:35:25 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:26.479 22:35:25 -- common/autotest_common.sh@52 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:26.479 22:35:25 -- common/autotest_common.sh@56 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:26.479 22:35:25 -- common/autotest_common.sh@58 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:26.479 22:35:25 -- common/autotest_common.sh@60 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:26.479 22:35:25 -- common/autotest_common.sh@62 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:26.479 22:35:25 -- common/autotest_common.sh@64 -- # : 00:27:26.479 22:35:25 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:26.479 22:35:25 -- common/autotest_common.sh@66 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:27:26.479 22:35:25 -- common/autotest_common.sh@68 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:26.479 22:35:25 -- common/autotest_common.sh@70 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:26.479 22:35:25 -- common/autotest_common.sh@72 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:26.479 22:35:25 -- common/autotest_common.sh@74 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:26.479 22:35:25 -- common/autotest_common.sh@76 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:26.479 22:35:25 -- common/autotest_common.sh@78 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:26.479 22:35:25 -- common/autotest_common.sh@80 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:26.479 22:35:25 -- common/autotest_common.sh@82 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:26.479 22:35:25 -- common/autotest_common.sh@84 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:26.479 22:35:25 -- 
common/autotest_common.sh@86 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:26.479 22:35:25 -- common/autotest_common.sh@88 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:26.479 22:35:25 -- common/autotest_common.sh@90 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:26.479 22:35:25 -- common/autotest_common.sh@92 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:26.479 22:35:25 -- common/autotest_common.sh@94 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:26.479 22:35:25 -- common/autotest_common.sh@96 -- # : rdma 00:27:26.479 22:35:25 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:26.479 22:35:25 -- common/autotest_common.sh@98 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:26.479 22:35:25 -- common/autotest_common.sh@100 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:26.479 22:35:25 -- common/autotest_common.sh@102 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:26.479 22:35:25 -- common/autotest_common.sh@104 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:26.479 22:35:25 -- common/autotest_common.sh@106 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:26.479 22:35:25 -- common/autotest_common.sh@108 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:26.479 22:35:25 -- common/autotest_common.sh@110 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:26.479 22:35:25 -- common/autotest_common.sh@112 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:26.479 22:35:25 -- common/autotest_common.sh@114 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:26.479 22:35:25 -- common/autotest_common.sh@116 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:26.479 22:35:25 -- common/autotest_common.sh@118 -- # : 00:27:26.479 22:35:25 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:26.479 22:35:25 -- common/autotest_common.sh@120 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:26.479 22:35:25 -- common/autotest_common.sh@122 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:26.479 22:35:25 -- common/autotest_common.sh@124 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:26.479 22:35:25 -- common/autotest_common.sh@126 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:26.479 22:35:25 -- common/autotest_common.sh@128 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:26.479 22:35:25 -- common/autotest_common.sh@130 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:26.479 22:35:25 -- common/autotest_common.sh@132 -- # : 00:27:26.479 22:35:25 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:27:26.479 
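Each pair of trace entries of the form ": 0" (or ": 1", ": rdma") followed by "export SPDK_TEST_..." is bash's default-assignment idiom: ":" is the no-op builtin, and a ${VAR:=value} expansion in its argument list assigns the value only when the variable is unset or empty, so anything already set by autorun-spdk.conf (RUN_NIGHTLY=1, SPDK_TEST_NVME=1, and so on) takes precedence. A short sketch of the idiom; the particular defaults shown are illustrative:

# Assign a default only if the variable is unset or empty, then export it.
: "${RUN_NIGHTLY:=0}"
export RUN_NIGHTLY
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT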
22:35:25 -- common/autotest_common.sh@134 -- # : true 00:27:26.479 22:35:25 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:26.479 22:35:25 -- common/autotest_common.sh@136 -- # : 1 00:27:26.479 22:35:25 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:26.479 22:35:25 -- common/autotest_common.sh@138 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:26.479 22:35:25 -- common/autotest_common.sh@140 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:26.479 22:35:25 -- common/autotest_common.sh@142 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:26.479 22:35:25 -- common/autotest_common.sh@144 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:26.479 22:35:25 -- common/autotest_common.sh@146 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:26.479 22:35:25 -- common/autotest_common.sh@148 -- # : 00:27:26.479 22:35:25 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:26.479 22:35:25 -- common/autotest_common.sh@150 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:26.479 22:35:25 -- common/autotest_common.sh@152 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:26.479 22:35:25 -- common/autotest_common.sh@154 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:26.479 22:35:25 -- common/autotest_common.sh@156 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:26.479 22:35:25 -- common/autotest_common.sh@158 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:26.479 22:35:25 -- common/autotest_common.sh@160 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:26.479 22:35:25 -- common/autotest_common.sh@163 -- # : 00:27:26.479 22:35:25 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:26.479 22:35:25 -- common/autotest_common.sh@165 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:26.479 22:35:25 -- common/autotest_common.sh@167 -- # : 0 00:27:26.479 22:35:25 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:26.479 22:35:25 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:26.479 22:35:25 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:26.479 22:35:25 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:26.480 22:35:25 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:26.480 22:35:25 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:26.480 22:35:25 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:26.480 22:35:25 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:26.480 22:35:25 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:26.480 22:35:25 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:26.480 22:35:25 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:26.480 22:35:25 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:26.480 22:35:25 -- common/autotest_common.sh@196 -- # cat 00:27:26.480 22:35:25 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:26.480 22:35:25 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:26.480 22:35:25 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:26.480 22:35:25 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:26.480 22:35:25 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:26.480 22:35:25 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:26.480 22:35:25 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:26.480 22:35:25 -- common/autotest_common.sh@235 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:26.480 22:35:25 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:26.480 22:35:25 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:26.480 22:35:25 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:26.480 22:35:25 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:27:26.480 22:35:25 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:27:26.480 22:35:25 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:26.480 22:35:25 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:26.480 22:35:25 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:26.480 22:35:25 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:26.480 22:35:25 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:26.480 22:35:25 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:26.480 22:35:25 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:26.480 22:35:25 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:26.480 22:35:25 -- common/autotest_common.sh@249 -- # valgrind= 00:27:26.480 22:35:25 -- common/autotest_common.sh@255 -- # uname -s 00:27:26.480 22:35:25 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:26.480 22:35:25 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:26.480 22:35:25 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:27:26.480 22:35:25 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:26.480 22:35:25 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:26.480 22:35:25 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:26.480 22:35:25 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:26.480 22:35:25 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:26.480 22:35:25 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:26.480 22:35:25 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:26.480 22:35:25 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:26.480 22:35:25 -- common/autotest_common.sh@309 -- # [[ -z 136956 ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@309 -- # kill -0 136956 00:27:26.480 22:35:25 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:26.480 22:35:25 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:26.480 22:35:25 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:26.480 22:35:25 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:26.480 22:35:25 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:27:26.480 22:35:25 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:26.480 22:35:25 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:26.480 22:35:25 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.sFSorO 00:27:26.480 22:35:25 -- 
common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:26.480 22:35:25 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:26.480 22:35:25 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.sFSorO/tests/interrupt /tmp/spdk.sFSorO 00:27:26.480 22:35:25 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@318 -- # df -T 00:27:26.480 22:35:25 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224465920 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224465920 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616180736 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983836160 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269972480 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:26.480 22:35:25 
-- common/autotest_common.sh@353 -- # avails["$mount"]=6272565248 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=97899524096 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:26.480 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=1803255808 00:27:26.480 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.480 
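The block above is set_test_storage walking df -T output: read -r splits each row into source/fs/size/use/avail/mount, and associative arrays keyed by mount point feed the sizing check that follows. The 2 GiB request is padded with 64 MiB of slack (requested_size=2214592512), compared against the avail column of each candidate (target_space=10616180736 on /), and new_size = used + requested (9983836160 + 2214592512 = 12198428672) is then checked against 95% of the filesystem size. A condensed sketch of the parse; the *1024 scaling is inferred from the byte-sized values in the trace, since df -T reports 1K blocks:

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    # Keep byte counts so the comparisons below need no unit juggling.
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)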
22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:27:26.480 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:26.480 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:26.481 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:27:26.481 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:27:26.481 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.481 22:35:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:27:26.481 22:35:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:26.481 22:35:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:26.481 22:35:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:26.481 22:35:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:26.481 22:35:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:26.481 22:35:25 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:26.481 * Looking for test storage... 00:27:26.481 22:35:25 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:26.481 22:35:25 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:26.481 22:35:25 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.481 22:35:25 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:26.481 22:35:25 -- common/autotest_common.sh@363 -- # mount=/ 00:27:26.481 22:35:25 -- common/autotest_common.sh@365 -- # target_space=10616180736 00:27:26.481 22:35:25 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:26.481 22:35:25 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:26.481 22:35:25 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:27:26.481 22:35:25 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:27:26.481 22:35:25 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:27:26.481 22:35:25 -- common/autotest_common.sh@372 -- # new_size=12198428672 00:27:26.481 22:35:25 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:26.481 22:35:25 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.481 22:35:25 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.481 22:35:25 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:26.481 22:35:25 -- common/autotest_common.sh@380 -- # return 0 00:27:26.481 22:35:25 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:27:26.481 22:35:25 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:26.481 22:35:25 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:26.481 22:35:25 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:26.481 22:35:25 -- common/autotest_common.sh@1672 -- # true 00:27:26.481 22:35:25 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:26.481 22:35:25 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:26.481 22:35:25 -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/13 ]] 00:27:26.481 22:35:25 -- common/autotest_common.sh@27 -- # exec 00:27:26.481 22:35:25 -- common/autotest_common.sh@29 -- # exec 00:27:26.481 22:35:25 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:26.481 22:35:25 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:27:26.481 22:35:25 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:26.481 22:35:25 -- common/autotest_common.sh@18 -- # set -x 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:26.481 22:35:25 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:26.481 22:35:25 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:26.481 22:35:25 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=137005 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 137005 /var/tmp/spdk.sock 00:27:26.481 22:35:25 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:26.481 22:35:25 -- common/autotest_common.sh@819 -- # '[' -z 137005 ']' 00:27:26.481 22:35:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.481 22:35:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:26.481 22:35:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.481 22:35:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:26.481 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:27:26.481 [2024-07-14 22:35:25.536826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
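Between launching interrupt_tgt with -m 0x07 -r /var/tmp/spdk.sock and issuing the first RPC, waitforlisten blocks until the daemon is actually serving on that socket. A simplified sketch of what such a wait loop does; the rpc_get_methods probe and the retry count are illustrative, not the exact helper from autotest_common.sh:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
        # Probe the UNIX-domain RPC socket; success means the server is up.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1    # timed out waiting for the listener
}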
00:27:26.481 [2024-07-14 22:35:25.537789] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137005 ] 00:27:26.740 [2024-07-14 22:35:25.713857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.999 [2024-07-14 22:35:25.913325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.999 [2024-07-14 22:35:25.913449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.999 [2024-07-14 22:35:25.913448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.258 [2024-07-14 22:35:26.176235] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:27.516 22:35:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:27.516 22:35:26 -- common/autotest_common.sh@852 -- # return 0 00:27:27.516 22:35:26 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:27:27.516 22:35:26 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:27:27.516 22:35:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.516 22:35:26 -- common/autotest_common.sh@10 -- # set +x 00:27:27.516 22:35:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.516 22:35:26 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:27:27.516 "name": "app_thread", 00:27:27.516 "id": 1, 00:27:27.516 "active_pollers": [], 00:27:27.516 "timed_pollers": [ 00:27:27.516 { 00:27:27.516 "name": "rpc_subsystem_poll", 00:27:27.516 "id": 1, 00:27:27.516 "state": "waiting", 00:27:27.516 "run_count": 0, 00:27:27.516 "busy_count": 0, 00:27:27.516 "period_ticks": 8800000 00:27:27.516 } 00:27:27.516 ], 00:27:27.516 "paused_pollers": [] 00:27:27.516 }' 00:27:27.516 22:35:26 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:27:27.775 22:35:26 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:27:27.775 22:35:26 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:27:27.775 22:35:26 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:27:27.775 22:35:26 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:27:27.775 22:35:26 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:27:27.775 22:35:26 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:27.775 22:35:26 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:27.775 22:35:26 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:27.775 5000+0 records in 00:27:27.775 5000+0 records out 00:27:27.775 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0228275 s, 449 MB/s 00:27:27.775 22:35:26 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:28.033 AIO0 00:27:28.033 22:35:26 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:28.292 22:35:27 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:27:28.550 22:35:27 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:27:28.550 22:35:27 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:27:28.550 22:35:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.550 22:35:27 -- common/autotest_common.sh@10 -- # set +x 00:27:28.550 22:35:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.550 22:35:27 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:27:28.550 "name": "app_thread", 00:27:28.550 "id": 1, 00:27:28.550 "active_pollers": [], 00:27:28.550 "timed_pollers": [ 00:27:28.550 { 00:27:28.550 "name": "rpc_subsystem_poll", 00:27:28.550 "id": 1, 00:27:28.550 "state": "waiting", 00:27:28.551 "run_count": 0, 00:27:28.551 "busy_count": 0, 00:27:28.551 "period_ticks": 8800000 00:27:28.551 } 00:27:28.551 ], 00:27:28.551 "paused_pollers": [] 00:27:28.551 }' 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:28.551 22:35:27 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 137005 00:27:28.551 22:35:27 -- common/autotest_common.sh@926 -- # '[' -z 137005 ']' 00:27:28.551 22:35:27 -- common/autotest_common.sh@930 -- # kill -0 137005 00:27:28.551 22:35:27 -- common/autotest_common.sh@931 -- # uname 00:27:28.551 22:35:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:28.551 22:35:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137005 00:27:28.551 22:35:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:28.551 killing process with pid 137005 00:27:28.551 22:35:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:28.551 22:35:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137005' 00:27:28.551 22:35:27 -- common/autotest_common.sh@945 -- # kill 137005 00:27:28.551 22:35:27 -- common/autotest_common.sh@950 -- # wait 137005 00:27:29.927 22:35:28 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:27:29.927 22:35:28 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:29.927 ************************************ 00:27:29.927 END TEST reap_unregistered_poller 00:27:29.927 ************************************ 00:27:29.927 00:27:29.927 real 0m3.309s 00:27:29.927 user 0m2.796s 00:27:29.927 sys 0m0.478s 00:27:29.927 22:35:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.927 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:27:29.927 22:35:28 -- spdk/autotest.sh@204 -- # uname -s 00:27:29.927 22:35:28 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:27:29.927 22:35:28 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:27:29.927 22:35:28 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:27:29.927 22:35:28 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:27:29.927 22:35:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.927 22:35:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.927 22:35:28 -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.927 ************************************ 00:27:29.927 START TEST spdk_dd 00:27:29.927 ************************************ 00:27:29.927 22:35:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:27:29.927 * Looking for test storage... 00:27:29.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:29.927 22:35:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.927 22:35:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.927 22:35:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.927 22:35:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.927 22:35:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.927 22:35:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.927 22:35:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.927 22:35:28 -- paths/export.sh@5 -- # export PATH 00:27:29.927 22:35:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.927 22:35:28 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:29.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:30.185 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:31.120 22:35:29 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:27:31.120 22:35:29 -- dd/dd.sh@11 -- # nvme_in_userspace 00:27:31.120 22:35:29 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:31.120 22:35:29 -- scripts/common.sh@312 -- # local nvmes 00:27:31.120 22:35:29 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:31.120 22:35:29 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:31.120 22:35:29 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:31.120 22:35:29 -- scripts/common.sh@297 -- # local bdf= 00:27:31.120 22:35:29 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:31.120 22:35:29 -- scripts/common.sh@232 -- # local class 00:27:31.120 
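nvme_in_userspace, traced next, enumerates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), i.e. the "0108"/-p02 pair the pipeline below filters on, and each hit is then vetted with pci_can_use and a check that the device is bound to the nvme driver. The enumeration itself is the lspci pipeline traced just below, reassembled onto one line:

# List PCI functions whose class code is 0108 with prog-if 02 (NVMe).
# -mm: machine-readable, -n: numeric IDs, -D: include the PCI domain.
# lspci -mm quotes its fields, so cc carries literal quotes to match $2.
nvme_bdfs=$(lspci -mm -n -D | grep -i -- -p02 |
    awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')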
22:35:29 -- scripts/common.sh@233 -- # local subclass 00:27:31.120 22:35:29 -- scripts/common.sh@234 -- # local progif 00:27:31.120 22:35:30 -- scripts/common.sh@235 -- # printf %02x 1 00:27:31.120 22:35:30 -- scripts/common.sh@235 -- # class=01 00:27:31.120 22:35:30 -- scripts/common.sh@236 -- # printf %02x 8 00:27:31.120 22:35:30 -- scripts/common.sh@236 -- # subclass=08 00:27:31.120 22:35:30 -- scripts/common.sh@237 -- # printf %02x 2 00:27:31.120 22:35:30 -- scripts/common.sh@237 -- # progif=02 00:27:31.120 22:35:30 -- scripts/common.sh@239 -- # hash lspci 00:27:31.120 22:35:30 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:31.120 22:35:30 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:31.120 22:35:30 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:31.120 22:35:30 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:31.120 22:35:30 -- scripts/common.sh@244 -- # tr -d '"' 00:27:31.120 22:35:30 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:31.120 22:35:30 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:31.120 22:35:30 -- scripts/common.sh@15 -- # local i 00:27:31.120 22:35:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:31.120 22:35:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:31.120 22:35:30 -- scripts/common.sh@24 -- # return 0 00:27:31.120 22:35:30 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:31.120 22:35:30 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:31.120 22:35:30 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:31.120 22:35:30 -- scripts/common.sh@322 -- # uname -s 00:27:31.120 22:35:30 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:31.120 22:35:30 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:31.120 22:35:30 -- scripts/common.sh@327 -- # (( 1 )) 00:27:31.120 22:35:30 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:27:31.120 22:35:30 -- dd/dd.sh@13 -- # check_liburing 00:27:31.120 22:35:30 -- dd/common.sh@139 -- # local lib so 00:27:31.120 22:35:30 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:27:31.120 22:35:30 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:27:31.120 22:35:30 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:27:31.120 22:35:30 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:27:31.121 22:35:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:31.121 22:35:30 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:27:31.121 22:35:30 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:27:31.121 22:35:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:31.121 22:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.121 22:35:30 -- common/autotest_common.sh@10 -- # set +x 00:27:31.121 ************************************ 00:27:31.121 START TEST spdk_dd_basic_rw 00:27:31.121 ************************************ 00:27:31.121 22:35:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:27:31.121 * Looking for test storage... 
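check_liburing, which this block traces entry by entry, decides whether spdk_dd was linked against liburing by asking the dynamic loader for the full dependency list and glob-matching the first field of each ldd-style line. A condensed sketch of the loop as the trace shows it:

check_liburing() {
    local lib so _
    local -g liburing_in_use=0
    # LD_TRACE_LOADED_OBJECTS=1 makes the loader print "lib => path (addr)"
    # lines instead of actually running the binary.
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
}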
00:27:31.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:31.121 22:35:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.121 22:35:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.121 22:35:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.121 22:35:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.121 22:35:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.121 22:35:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.121 22:35:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.121 22:35:30 -- paths/export.sh@5 -- # export PATH 00:27:31.121 22:35:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.121 22:35:30 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:27:31.121 22:35:30 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:27:31.121 22:35:30 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:27:31.121 22:35:30 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:27:31.121 22:35:30 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:27:31.121 22:35:30 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:27:31.121 22:35:30 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:27:31.121 22:35:30 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:31.121 22:35:30 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:31.121 22:35:30 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:27:31.121 22:35:30 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:27:31.121 22:35:30 -- dd/common.sh@126 -- # mapfile -t id 00:27:31.121 22:35:30 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:27:31.381 22:35:30 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2288 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:27:31.381 22:35:30 -- dd/common.sh@130 -- # lbaf=04 00:27:31.381 22:35:30 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change 
Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2288 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:27:31.381 22:35:30 -- dd/common.sh@132 -- # lbaf=4096 00:27:31.381 22:35:30 -- dd/common.sh@134 -- # echo 4096 00:27:31.381 22:35:30 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:27:31.381 22:35:30 -- dd/basic_rw.sh@96 -- # : 00:27:31.381 22:35:30 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:31.381 22:35:30 -- dd/basic_rw.sh@96 -- # gen_conf 00:27:31.381 22:35:30 -- dd/common.sh@31 -- # xtrace_disable 00:27:31.381 22:35:30 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:27:31.381 22:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.381 22:35:30 -- common/autotest_common.sh@10 -- # set +x 00:27:31.381 22:35:30 -- common/autotest_common.sh@10 -- # set +x 00:27:31.381 ************************************ 00:27:31.381 START TEST dd_bs_lt_native_bs 
00:27:31.381 ************************************ 00:27:31.381 22:35:30 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:31.381 22:35:30 -- common/autotest_common.sh@640 -- # local es=0 00:27:31.381 22:35:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:31.381 22:35:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.381 22:35:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.381 22:35:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.381 22:35:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.381 22:35:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.381 22:35:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.381 22:35:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.381 22:35:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:31.382 22:35:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:31.640 { 00:27:31.640 "subsystems": [ 00:27:31.640 { 00:27:31.640 "subsystem": "bdev", 00:27:31.640 "config": [ 00:27:31.640 { 00:27:31.640 "params": { 00:27:31.640 "trtype": "pcie", 00:27:31.640 "traddr": "0000:00:06.0", 00:27:31.640 "name": "Nvme0" 00:27:31.640 }, 00:27:31.640 "method": "bdev_nvme_attach_controller" 00:27:31.640 }, 00:27:31.640 { 00:27:31.640 "method": "bdev_wait_for_examine" 00:27:31.640 } 00:27:31.640 ] 00:27:31.640 } 00:27:31.640 ] 00:27:31.640 } 00:27:31.640 [2024-07-14 22:35:30.526260] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:31.640 [2024-07-14 22:35:30.526856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137322 ] 00:27:31.640 [2024-07-14 22:35:30.702226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.898 [2024-07-14 22:35:30.935445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.464 [2024-07-14 22:35:31.248704] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:27:32.464 [2024-07-14 22:35:31.249021] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:33.030 [2024-07-14 22:35:31.822183] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:33.288 ************************************ 00:27:33.288 END TEST dd_bs_lt_native_bs 00:27:33.288 ************************************ 00:27:33.288 22:35:32 -- common/autotest_common.sh@643 -- # es=234 00:27:33.288 22:35:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:33.288 22:35:32 -- common/autotest_common.sh@652 -- # es=106 00:27:33.288 22:35:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:33.288 22:35:32 -- common/autotest_common.sh@660 -- # es=1 00:27:33.288 22:35:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:33.288 00:27:33.288 real 0m1.731s 00:27:33.288 user 0m1.468s 00:27:33.288 sys 0m0.250s 00:27:33.288 22:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.288 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.288 22:35:32 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:27:33.288 22:35:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:33.288 22:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.288 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.288 ************************************ 00:27:33.288 START TEST dd_rw 00:27:33.288 ************************************ 00:27:33.288 22:35:32 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:27:33.288 22:35:32 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:27:33.288 22:35:32 -- dd/basic_rw.sh@12 -- # local count size 00:27:33.288 22:35:32 -- dd/basic_rw.sh@13 -- # local qds bss 00:27:33.288 22:35:32 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:27:33.288 22:35:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:33.289 22:35:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:33.289 22:35:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:33.289 22:35:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:33.289 22:35:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:33.289 22:35:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:33.289 22:35:32 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:33.289 22:35:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:33.289 22:35:32 -- dd/basic_rw.sh@23 -- # count=15 00:27:33.289 22:35:32 -- dd/basic_rw.sh@24 -- # count=15 00:27:33.289 22:35:32 -- dd/basic_rw.sh@25 -- # size=61440 00:27:33.289 22:35:32 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:33.289 22:35:32 -- dd/common.sh@98 -- # xtrace_disable 00:27:33.289 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.854 22:35:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:27:33.854 22:35:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:33.854 22:35:32 -- dd/common.sh@31 -- # xtrace_disable 00:27:33.854 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.854 [2024-07-14 22:35:32.893004] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:33.854 [2024-07-14 22:35:32.893414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137382 ] 00:27:33.854 { 00:27:33.854 "subsystems": [ 00:27:33.854 { 00:27:33.854 "subsystem": "bdev", 00:27:33.854 "config": [ 00:27:33.854 { 00:27:33.854 "params": { 00:27:33.854 "trtype": "pcie", 00:27:33.854 "traddr": "0000:00:06.0", 00:27:33.854 "name": "Nvme0" 00:27:33.854 }, 00:27:33.854 "method": "bdev_nvme_attach_controller" 00:27:33.854 }, 00:27:33.854 { 00:27:33.854 "method": "bdev_wait_for_examine" 00:27:33.854 } 00:27:33.854 ] 00:27:33.854 } 00:27:33.854 ] 00:27:33.854 } 00:27:34.112 [2024-07-14 22:35:33.058574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.369 [2024-07-14 22:35:33.227960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.561  Copying: 60/60 [kB] (average 19 MBps) 00:27:35.561 00:27:35.561 22:35:34 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:27:35.561 22:35:34 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:35.561 22:35:34 -- dd/common.sh@31 -- # xtrace_disable 00:27:35.562 22:35:34 -- common/autotest_common.sh@10 -- # set +x 00:27:35.562 { 00:27:35.562 "subsystems": [ 00:27:35.562 { 00:27:35.562 "subsystem": "bdev", 00:27:35.562 "config": [ 00:27:35.562 { 00:27:35.562 "params": { 00:27:35.562 "trtype": "pcie", 00:27:35.562 "traddr": "0000:00:06.0", 00:27:35.562 "name": "Nvme0" 00:27:35.562 }, 00:27:35.562 "method": "bdev_nvme_attach_controller" 00:27:35.562 }, 00:27:35.562 { 00:27:35.562 "method": "bdev_wait_for_examine" 00:27:35.562 } 00:27:35.562 ] 00:27:35.562 } 00:27:35.562 ] 00:27:35.562 } 00:27:35.562 [2024-07-14 22:35:34.485464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:35.562 [2024-07-14 22:35:34.485856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137405 ] 00:27:35.820 [2024-07-14 22:35:34.650524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.820 [2024-07-14 22:35:34.810443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.012  Copying: 60/60 [kB] (average 19 MBps) 00:27:37.012 00:27:37.271 22:35:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:37.271 22:35:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:37.271 22:35:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:37.271 22:35:36 -- dd/common.sh@11 -- # local nvme_ref= 00:27:37.271 22:35:36 -- dd/common.sh@12 -- # local size=61440 00:27:37.271 22:35:36 -- dd/common.sh@14 -- # local bs=1048576 00:27:37.271 22:35:36 -- dd/common.sh@15 -- # local count=1 00:27:37.271 22:35:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:37.271 22:35:36 -- dd/common.sh@18 -- # gen_conf 00:27:37.271 22:35:36 -- dd/common.sh@31 -- # xtrace_disable 00:27:37.271 22:35:36 -- common/autotest_common.sh@10 -- # set +x 00:27:37.271 [2024-07-14 22:35:36.154904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:37.271 [2024-07-14 22:35:36.155366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137437 ] 00:27:37.271 { 00:27:37.271 "subsystems": [ 00:27:37.271 { 00:27:37.271 "subsystem": "bdev", 00:27:37.271 "config": [ 00:27:37.271 { 00:27:37.271 "params": { 00:27:37.271 "trtype": "pcie", 00:27:37.271 "traddr": "0000:00:06.0", 00:27:37.271 "name": "Nvme0" 00:27:37.271 }, 00:27:37.271 "method": "bdev_nvme_attach_controller" 00:27:37.271 }, 00:27:37.271 { 00:27:37.271 "method": "bdev_wait_for_examine" 00:27:37.271 } 00:27:37.271 ] 00:27:37.271 } 00:27:37.271 ] 00:27:37.271 } 00:27:37.271 [2024-07-14 22:35:36.322171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.529 [2024-07-14 22:35:36.475783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.721  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:38.721 00:27:38.721 22:35:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:38.721 22:35:37 -- dd/basic_rw.sh@23 -- # count=15 00:27:38.721 22:35:37 -- dd/basic_rw.sh@24 -- # count=15 00:27:38.721 22:35:37 -- dd/basic_rw.sh@25 -- # size=61440 00:27:38.721 22:35:37 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:38.721 22:35:37 -- dd/common.sh@98 -- # xtrace_disable 00:27:38.721 22:35:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.287 22:35:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:27:39.288 22:35:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:39.288 22:35:38 -- dd/common.sh@31 -- # xtrace_disable 00:27:39.288 22:35:38 -- common/autotest_common.sh@10 -- # set +x 00:27:39.288 [2024-07-14 22:35:38.287179] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:39.288 [2024-07-14 22:35:38.287531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137468 ] 00:27:39.288 { 00:27:39.288 "subsystems": [ 00:27:39.288 { 00:27:39.288 "subsystem": "bdev", 00:27:39.288 "config": [ 00:27:39.288 { 00:27:39.288 "params": { 00:27:39.288 "trtype": "pcie", 00:27:39.288 "traddr": "0000:00:06.0", 00:27:39.288 "name": "Nvme0" 00:27:39.288 }, 00:27:39.288 "method": "bdev_nvme_attach_controller" 00:27:39.288 }, 00:27:39.288 { 00:27:39.288 "method": "bdev_wait_for_examine" 00:27:39.288 } 00:27:39.288 ] 00:27:39.288 } 00:27:39.288 ] 00:27:39.288 } 00:27:39.546 [2024-07-14 22:35:38.454168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.546 [2024-07-14 22:35:38.607842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.112  Copying: 60/60 [kB] (average 58 MBps) 00:27:41.112 00:27:41.112 22:35:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:27:41.112 22:35:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:41.112 22:35:39 -- dd/common.sh@31 -- # xtrace_disable 00:27:41.112 22:35:39 -- common/autotest_common.sh@10 -- # set +x 00:27:41.112 { 00:27:41.112 "subsystems": [ 00:27:41.112 { 00:27:41.112 "subsystem": "bdev", 00:27:41.112 "config": [ 00:27:41.112 { 00:27:41.112 "params": { 00:27:41.112 "trtype": "pcie", 00:27:41.112 "traddr": "0000:00:06.0", 00:27:41.112 "name": "Nvme0" 00:27:41.112 }, 00:27:41.112 "method": "bdev_nvme_attach_controller" 00:27:41.112 }, 00:27:41.112 { 00:27:41.112 "method": "bdev_wait_for_examine" 00:27:41.112 } 00:27:41.112 ] 00:27:41.112 } 00:27:41.112 ] 00:27:41.112 } 00:27:41.112 [2024-07-14 22:35:39.986261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:41.112 [2024-07-14 22:35:39.986586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137508 ] 00:27:41.112 [2024-07-14 22:35:40.154273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.370 [2024-07-14 22:35:40.319662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.871  Copying: 60/60 [kB] (average 58 MBps) 00:27:42.871 00:27:42.871 22:35:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:42.871 22:35:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:42.871 22:35:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:42.871 22:35:41 -- dd/common.sh@11 -- # local nvme_ref= 00:27:42.871 22:35:41 -- dd/common.sh@12 -- # local size=61440 00:27:42.871 22:35:41 -- dd/common.sh@14 -- # local bs=1048576 00:27:42.871 22:35:41 -- dd/common.sh@15 -- # local count=1 00:27:42.871 22:35:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:42.871 22:35:41 -- dd/common.sh@18 -- # gen_conf 00:27:42.871 22:35:41 -- dd/common.sh@31 -- # xtrace_disable 00:27:42.871 22:35:41 -- common/autotest_common.sh@10 -- # set +x 00:27:43.130 [2024-07-14 22:35:41.964676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:43.130 { 00:27:43.130 "subsystems": [ 00:27:43.130 { 00:27:43.130 "subsystem": "bdev", 00:27:43.130 "config": [ 00:27:43.130 { 00:27:43.130 "params": { 00:27:43.130 "trtype": "pcie", 00:27:43.130 "traddr": "0000:00:06.0", 00:27:43.130 "name": "Nvme0" 00:27:43.130 }, 00:27:43.130 "method": "bdev_nvme_attach_controller" 00:27:43.130 }, 00:27:43.130 { 00:27:43.130 "method": "bdev_wait_for_examine" 00:27:43.130 } 00:27:43.130 ] 00:27:43.130 } 00:27:43.130 ] 00:27:43.130 } 00:27:43.130 [2024-07-14 22:35:41.965102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137547 ] 00:27:43.130 [2024-07-14 22:35:42.133340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.387 [2024-07-14 22:35:42.358288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.327  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:45.327 00:27:45.327 22:35:44 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:45.327 22:35:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:45.327 22:35:44 -- dd/basic_rw.sh@23 -- # count=7 00:27:45.327 22:35:44 -- dd/basic_rw.sh@24 -- # count=7 00:27:45.327 22:35:44 -- dd/basic_rw.sh@25 -- # size=57344 00:27:45.327 22:35:44 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:45.327 22:35:44 -- dd/common.sh@98 -- # xtrace_disable 00:27:45.327 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.893 22:35:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:45.893 22:35:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:45.893 22:35:44 -- dd/common.sh@31 -- # xtrace_disable 00:27:45.893 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.893 [2024-07-14 22:35:44.744392] Starting SPDK 
v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:45.893 [2024-07-14 22:35:44.744705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137579 ] 00:27:45.893 { 00:27:45.893 "subsystems": [ 00:27:45.893 { 00:27:45.893 "subsystem": "bdev", 00:27:45.893 "config": [ 00:27:45.893 { 00:27:45.893 "params": { 00:27:45.893 "trtype": "pcie", 00:27:45.893 "traddr": "0000:00:06.0", 00:27:45.893 "name": "Nvme0" 00:27:45.893 }, 00:27:45.893 "method": "bdev_nvme_attach_controller" 00:27:45.893 }, 00:27:45.893 { 00:27:45.893 "method": "bdev_wait_for_examine" 00:27:45.893 } 00:27:45.893 ] 00:27:45.893 } 00:27:45.893 ] 00:27:45.893 } 00:27:45.893 [2024-07-14 22:35:44.911430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.151 [2024-07-14 22:35:45.141811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.653  Copying: 56/56 [kB] (average 27 MBps) 00:27:47.653 00:27:47.653 22:35:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:47.653 22:35:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:47.653 22:35:46 -- dd/common.sh@31 -- # xtrace_disable 00:27:47.653 22:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:47.653 { 00:27:47.653 "subsystems": [ 00:27:47.653 { 00:27:47.653 "subsystem": "bdev", 00:27:47.653 "config": [ 00:27:47.653 { 00:27:47.653 "params": { 00:27:47.653 "trtype": "pcie", 00:27:47.653 "traddr": "0000:00:06.0", 00:27:47.653 "name": "Nvme0" 00:27:47.653 }, 00:27:47.653 "method": "bdev_nvme_attach_controller" 00:27:47.653 }, 00:27:47.653 { 00:27:47.653 "method": "bdev_wait_for_examine" 00:27:47.653 } 00:27:47.653 ] 00:27:47.653 } 00:27:47.653 ] 00:27:47.653 } 00:27:47.653 [2024-07-14 22:35:46.716547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:47.653 [2024-07-14 22:35:46.716883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137607 ] 00:27:47.912 [2024-07-14 22:35:46.884167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.170 [2024-07-14 22:35:47.125584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.110  Copying: 56/56 [kB] (average 54 MBps) 00:27:50.110 00:27:50.110 22:35:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:50.110 22:35:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:50.110 22:35:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:50.110 22:35:48 -- dd/common.sh@11 -- # local nvme_ref= 00:27:50.110 22:35:48 -- dd/common.sh@12 -- # local size=57344 00:27:50.110 22:35:48 -- dd/common.sh@14 -- # local bs=1048576 00:27:50.110 22:35:48 -- dd/common.sh@15 -- # local count=1 00:27:50.110 22:35:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:50.110 22:35:48 -- dd/common.sh@18 -- # gen_conf 00:27:50.110 22:35:48 -- dd/common.sh@31 -- # xtrace_disable 00:27:50.110 22:35:48 -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 { 00:27:50.110 "subsystems": [ 00:27:50.110 { 00:27:50.110 "subsystem": "bdev", 00:27:50.110 "config": [ 00:27:50.110 { 00:27:50.110 "params": { 00:27:50.110 "trtype": "pcie", 00:27:50.110 "traddr": "0000:00:06.0", 00:27:50.110 "name": "Nvme0" 00:27:50.110 }, 00:27:50.110 "method": "bdev_nvme_attach_controller" 00:27:50.110 }, 00:27:50.110 { 00:27:50.110 "method": "bdev_wait_for_examine" 00:27:50.110 } 00:27:50.110 ] 00:27:50.110 } 00:27:50.110 ] 00:27:50.110 } 00:27:50.110 [2024-07-14 22:35:48.956645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:50.110 [2024-07-14 22:35:48.957584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137647 ] 00:27:50.110 [2024-07-14 22:35:49.128312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.368 [2024-07-14 22:35:49.314077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.561  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:51.562 00:27:51.562 22:35:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:51.562 22:35:50 -- dd/basic_rw.sh@23 -- # count=7 00:27:51.562 22:35:50 -- dd/basic_rw.sh@24 -- # count=7 00:27:51.562 22:35:50 -- dd/basic_rw.sh@25 -- # size=57344 00:27:51.562 22:35:50 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:51.562 22:35:50 -- dd/common.sh@98 -- # xtrace_disable 00:27:51.562 22:35:50 -- common/autotest_common.sh@10 -- # set +x 00:27:52.497 22:35:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:52.497 22:35:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:52.497 22:35:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:52.497 22:35:51 -- common/autotest_common.sh@10 -- # set +x 00:27:52.497 [2024-07-14 22:35:51.278617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:52.497 [2024-07-14 22:35:51.279029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137695 ] 00:27:52.497 { 00:27:52.498 "subsystems": [ 00:27:52.498 { 00:27:52.498 "subsystem": "bdev", 00:27:52.498 "config": [ 00:27:52.498 { 00:27:52.498 "params": { 00:27:52.498 "trtype": "pcie", 00:27:52.498 "traddr": "0000:00:06.0", 00:27:52.498 "name": "Nvme0" 00:27:52.498 }, 00:27:52.498 "method": "bdev_nvme_attach_controller" 00:27:52.498 }, 00:27:52.498 { 00:27:52.498 "method": "bdev_wait_for_examine" 00:27:52.498 } 00:27:52.498 ] 00:27:52.498 } 00:27:52.498 ] 00:27:52.498 } 00:27:52.498 [2024-07-14 22:35:51.446636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.757 [2024-07-14 22:35:51.616934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.949  Copying: 56/56 [kB] (average 54 MBps) 00:27:53.949 00:27:53.949 22:35:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:53.949 22:35:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:53.949 22:35:52 -- dd/common.sh@31 -- # xtrace_disable 00:27:53.949 22:35:52 -- common/autotest_common.sh@10 -- # set +x 00:27:53.949 { 00:27:53.949 "subsystems": [ 00:27:53.949 { 00:27:53.949 "subsystem": "bdev", 00:27:53.949 "config": [ 00:27:53.949 { 00:27:53.949 "params": { 00:27:53.949 "trtype": "pcie", 00:27:53.949 "traddr": "0000:00:06.0", 00:27:53.949 "name": "Nvme0" 00:27:53.949 }, 00:27:53.949 "method": "bdev_nvme_attach_controller" 00:27:53.949 }, 00:27:53.949 { 00:27:53.949 "method": "bdev_wait_for_examine" 00:27:53.949 } 00:27:53.949 ] 00:27:53.949 } 00:27:53.949 ] 00:27:53.949 } 00:27:53.949 [2024-07-14 22:35:52.942276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:53.949 [2024-07-14 22:35:52.942637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137715 ] 00:27:54.209 [2024-07-14 22:35:53.109576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.209 [2024-07-14 22:35:53.270655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.705  Copying: 56/56 [kB] (average 54 MBps) 00:27:55.705 00:27:55.705 22:35:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:55.705 22:35:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:55.705 22:35:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:55.705 22:35:54 -- dd/common.sh@11 -- # local nvme_ref= 00:27:55.705 22:35:54 -- dd/common.sh@12 -- # local size=57344 00:27:55.705 22:35:54 -- dd/common.sh@14 -- # local bs=1048576 00:27:55.705 22:35:54 -- dd/common.sh@15 -- # local count=1 00:27:55.705 22:35:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:55.705 22:35:54 -- dd/common.sh@18 -- # gen_conf 00:27:55.705 22:35:54 -- dd/common.sh@31 -- # xtrace_disable 00:27:55.705 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:27:55.705 [2024-07-14 22:35:54.502440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:55.705 [2024-07-14 22:35:54.502787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137743 ] 00:27:55.705 { 00:27:55.705 "subsystems": [ 00:27:55.705 { 00:27:55.705 "subsystem": "bdev", 00:27:55.705 "config": [ 00:27:55.705 { 00:27:55.705 "params": { 00:27:55.705 "trtype": "pcie", 00:27:55.705 "traddr": "0000:00:06.0", 00:27:55.705 "name": "Nvme0" 00:27:55.705 }, 00:27:55.705 "method": "bdev_nvme_attach_controller" 00:27:55.705 }, 00:27:55.705 { 00:27:55.705 "method": "bdev_wait_for_examine" 00:27:55.705 } 00:27:55.705 ] 00:27:55.705 } 00:27:55.705 ] 00:27:55.705 } 00:27:55.705 [2024-07-14 22:35:54.654191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.964 [2024-07-14 22:35:54.819575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.189  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:57.189 00:27:57.189 22:35:56 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:57.189 22:35:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:57.189 22:35:56 -- dd/basic_rw.sh@23 -- # count=3 00:27:57.189 22:35:56 -- dd/basic_rw.sh@24 -- # count=3 00:27:57.189 22:35:56 -- dd/basic_rw.sh@25 -- # size=49152 00:27:57.189 22:35:56 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:57.189 22:35:56 -- dd/common.sh@98 -- # xtrace_disable 00:27:57.189 22:35:56 -- common/autotest_common.sh@10 -- # set +x 00:27:57.448 22:35:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:57.448 22:35:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:57.448 22:35:56 -- dd/common.sh@31 -- # xtrace_disable 00:27:57.448 22:35:56 -- common/autotest_common.sh@10 -- # set +x 00:27:57.705 [2024-07-14 22:35:56.571579] Starting SPDK 
v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:57.705 [2024-07-14 22:35:56.571995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137775 ] 00:27:57.705 { 00:27:57.705 "subsystems": [ 00:27:57.705 { 00:27:57.705 "subsystem": "bdev", 00:27:57.705 "config": [ 00:27:57.705 { 00:27:57.705 "params": { 00:27:57.705 "trtype": "pcie", 00:27:57.705 "traddr": "0000:00:06.0", 00:27:57.705 "name": "Nvme0" 00:27:57.705 }, 00:27:57.705 "method": "bdev_nvme_attach_controller" 00:27:57.705 }, 00:27:57.705 { 00:27:57.705 "method": "bdev_wait_for_examine" 00:27:57.705 } 00:27:57.705 ] 00:27:57.705 } 00:27:57.705 ] 00:27:57.705 } 00:27:57.705 [2024-07-14 22:35:56.738411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.962 [2024-07-14 22:35:56.891388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.155  Copying: 48/48 [kB] (average 46 MBps) 00:27:59.155 00:27:59.155 22:35:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:59.155 22:35:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:59.155 22:35:58 -- dd/common.sh@31 -- # xtrace_disable 00:27:59.155 22:35:58 -- common/autotest_common.sh@10 -- # set +x 00:27:59.155 { 00:27:59.155 "subsystems": [ 00:27:59.155 { 00:27:59.155 "subsystem": "bdev", 00:27:59.155 "config": [ 00:27:59.155 { 00:27:59.155 "params": { 00:27:59.155 "trtype": "pcie", 00:27:59.155 "traddr": "0000:00:06.0", 00:27:59.155 "name": "Nvme0" 00:27:59.155 }, 00:27:59.155 "method": "bdev_nvme_attach_controller" 00:27:59.155 }, 00:27:59.155 { 00:27:59.155 "method": "bdev_wait_for_examine" 00:27:59.155 } 00:27:59.155 ] 00:27:59.155 } 00:27:59.155 ] 00:27:59.155 } 00:27:59.155 [2024-07-14 22:35:58.138094] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:59.155 [2024-07-14 22:35:58.138428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137802 ] 00:27:59.414 [2024-07-14 22:35:58.305313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.414 [2024-07-14 22:35:58.471986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.914  Copying: 48/48 [kB] (average 46 MBps) 00:28:00.914 00:28:00.914 22:35:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.914 22:35:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:00.914 22:35:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:00.914 22:35:59 -- dd/common.sh@11 -- # local nvme_ref= 00:28:00.914 22:35:59 -- dd/common.sh@12 -- # local size=49152 00:28:00.914 22:35:59 -- dd/common.sh@14 -- # local bs=1048576 00:28:00.914 22:35:59 -- dd/common.sh@15 -- # local count=1 00:28:00.914 22:35:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:00.915 22:35:59 -- dd/common.sh@18 -- # gen_conf 00:28:00.915 22:35:59 -- dd/common.sh@31 -- # xtrace_disable 00:28:00.915 22:35:59 -- common/autotest_common.sh@10 -- # set +x 00:28:00.915 { 00:28:00.915 "subsystems": [ 00:28:00.915 { 00:28:00.915 "subsystem": "bdev", 00:28:00.915 "config": [ 00:28:00.915 { 00:28:00.915 "params": { 00:28:00.915 "trtype": "pcie", 00:28:00.915 "traddr": "0000:00:06.0", 00:28:00.915 "name": "Nvme0" 00:28:00.915 }, 00:28:00.915 "method": "bdev_nvme_attach_controller" 00:28:00.915 }, 00:28:00.915 { 00:28:00.915 "method": "bdev_wait_for_examine" 00:28:00.915 } 00:28:00.915 ] 00:28:00.915 } 00:28:00.915 ] 00:28:00.915 } 00:28:00.915 [2024-07-14 22:35:59.813747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:00.915 [2024-07-14 22:35:59.814160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137827 ] 00:28:00.915 [2024-07-14 22:35:59.987066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.173 [2024-07-14 22:36:00.158348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.366  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:02.366 00:28:02.366 22:36:01 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:02.366 22:36:01 -- dd/basic_rw.sh@23 -- # count=3 00:28:02.366 22:36:01 -- dd/basic_rw.sh@24 -- # count=3 00:28:02.366 22:36:01 -- dd/basic_rw.sh@25 -- # size=49152 00:28:02.366 22:36:01 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:28:02.366 22:36:01 -- dd/common.sh@98 -- # xtrace_disable 00:28:02.366 22:36:01 -- common/autotest_common.sh@10 -- # set +x 00:28:02.932 22:36:01 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:28:02.932 22:36:01 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:02.932 22:36:01 -- dd/common.sh@31 -- # xtrace_disable 00:28:02.932 22:36:01 -- common/autotest_common.sh@10 -- # set +x 00:28:02.932 [2024-07-14 22:36:01.928303] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:02.932 [2024-07-14 22:36:01.928717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137870 ] 00:28:02.932 { 00:28:02.932 "subsystems": [ 00:28:02.932 { 00:28:02.932 "subsystem": "bdev", 00:28:02.932 "config": [ 00:28:02.932 { 00:28:02.932 "params": { 00:28:02.932 "trtype": "pcie", 00:28:02.932 "traddr": "0000:00:06.0", 00:28:02.932 "name": "Nvme0" 00:28:02.932 }, 00:28:02.932 "method": "bdev_nvme_attach_controller" 00:28:02.932 }, 00:28:02.932 { 00:28:02.932 "method": "bdev_wait_for_examine" 00:28:02.932 } 00:28:02.932 ] 00:28:02.932 } 00:28:02.932 ] 00:28:02.932 } 00:28:03.190 [2024-07-14 22:36:02.095877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.449 [2024-07-14 22:36:02.287240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.658  Copying: 48/48 [kB] (average 46 MBps) 00:28:04.658 00:28:04.658 22:36:03 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:28:04.658 22:36:03 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:04.658 22:36:03 -- dd/common.sh@31 -- # xtrace_disable 00:28:04.658 22:36:03 -- common/autotest_common.sh@10 -- # set +x 00:28:04.658 [2024-07-14 22:36:03.613833] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:04.658 [2024-07-14 22:36:03.614157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137901 ] 00:28:04.658 { 00:28:04.658 "subsystems": [ 00:28:04.658 { 00:28:04.658 "subsystem": "bdev", 00:28:04.658 "config": [ 00:28:04.658 { 00:28:04.658 "params": { 00:28:04.658 "trtype": "pcie", 00:28:04.658 "traddr": "0000:00:06.0", 00:28:04.658 "name": "Nvme0" 00:28:04.658 }, 00:28:04.658 "method": "bdev_nvme_attach_controller" 00:28:04.658 }, 00:28:04.658 { 00:28:04.658 "method": "bdev_wait_for_examine" 00:28:04.658 } 00:28:04.658 ] 00:28:04.658 } 00:28:04.658 ] 00:28:04.658 } 00:28:04.916 [2024-07-14 22:36:03.766755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.916 [2024-07-14 22:36:03.921324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.110  Copying: 48/48 [kB] (average 46 MBps) 00:28:06.110 00:28:06.110 22:36:05 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:06.110 22:36:05 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:06.110 22:36:05 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:06.110 22:36:05 -- dd/common.sh@11 -- # local nvme_ref= 00:28:06.110 22:36:05 -- dd/common.sh@12 -- # local size=49152 00:28:06.110 22:36:05 -- dd/common.sh@14 -- # local bs=1048576 00:28:06.110 22:36:05 -- dd/common.sh@15 -- # local count=1 00:28:06.110 22:36:05 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:06.110 22:36:05 -- dd/common.sh@18 -- # gen_conf 00:28:06.110 22:36:05 -- dd/common.sh@31 -- # xtrace_disable 00:28:06.110 22:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:06.110 { 00:28:06.110 "subsystems": [ 00:28:06.110 { 00:28:06.110 
"subsystem": "bdev", 00:28:06.110 "config": [ 00:28:06.110 { 00:28:06.110 "params": { 00:28:06.110 "trtype": "pcie", 00:28:06.110 "traddr": "0000:00:06.0", 00:28:06.110 "name": "Nvme0" 00:28:06.110 }, 00:28:06.110 "method": "bdev_nvme_attach_controller" 00:28:06.110 }, 00:28:06.110 { 00:28:06.110 "method": "bdev_wait_for_examine" 00:28:06.110 } 00:28:06.110 ] 00:28:06.110 } 00:28:06.110 ] 00:28:06.110 } 00:28:06.110 [2024-07-14 22:36:05.179226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:06.110 [2024-07-14 22:36:05.179587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137923 ] 00:28:06.369 [2024-07-14 22:36:05.345634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.628 [2024-07-14 22:36:05.499538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.822  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:07.822 00:28:07.822 ************************************ 00:28:07.822 END TEST dd_rw 00:28:07.822 ************************************ 00:28:07.822 00:28:07.822 real 0m34.552s 00:28:07.822 user 0m28.817s 00:28:07.822 sys 0m4.495s 00:28:07.822 22:36:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.822 22:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:07.822 22:36:06 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:28:07.822 22:36:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:07.822 22:36:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:07.822 22:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:07.822 ************************************ 00:28:07.822 START TEST dd_rw_offset 00:28:07.822 ************************************ 00:28:07.822 22:36:06 -- common/autotest_common.sh@1104 -- # basic_offset 00:28:07.822 22:36:06 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:28:07.822 22:36:06 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:28:07.822 22:36:06 -- dd/common.sh@98 -- # xtrace_disable 00:28:07.822 22:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:07.822 22:36:06 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:28:07.822 22:36:06 -- dd/basic_rw.sh@56 -- # 
data=6nbasf5prrny86w24rdd3g1qi1dx5nm0vforrmvbic2sqh4hh343cyp0ljhquordmez9rqh98o3jv0vkwf8t0puihr6vx3b4s2oqj9szeyfq2n7tdjpwk61691zcqw9aok6vg9x7oypay8a1i1mgok640yvv1rfmvi0ygl984vl0b4j7ak2cw8qykvtmqs9b528lucqzfhciq4m98vvmswy8v1sljn616nzbj86mvhnbmbdxalisuck9jsri9vgcxpkvdkd58a4b8awwhc8bbf6q6lkirpvlvz33zocdpy21q8ua1zif764yf84b1xyp70ipxzm9wket9zif20tj1bx8oln1epn68dhx654bln37jzhlg2p6d5y91gqppda1wxq7ypy1lg2153cmzc2a6p9ni8lgbmcoktpd4k3ou86ygd3kgh286m0gm0kfto8v3xrgv4cpkfbxb9cu5q9y1w9jmgjr8x18oc24xgvyoge5b6lnjne7ek7k8mzqtmocgjwilfgh1l4nk7i98n3a9jd0yph5xrm5pivk7ipaiv79hl4zz0jnhqnpecninzr6xcv9swncptx1ho60j200i9e2223m47itedzhl5ecuuh4nbccclupi7bd0vm2j9go4f4avvrmc0f8iuolrx3tu6zynel384b86n3jxgwvi32f46bc7lq8f2it8a55bfuekilq182kx462vf0onm5f2c775h01jrcxpq95ucu09n2owgziwxiwzsh32cl7h7c0b47fyiiyxs01uhshil0ink6idq2xzz8zxfucysrdp046nvg4whxp0pxhzgwnhmlee6tyhc5q300ajhcdnqe944swkt9ue8nz5f06hi294b8mrtvpnd78afbiwu5v82qovk05hpdr8n3w4762h2swuc4m96yhf16980zzgtsxnqfu66t9lbmr85zu24mlm1j0t6dnak00mt6jr7pinm6xlfn8z9e0wcob70qgul9caacj6m5ozuo51kttp48fg63nfaywtywuvk3fhptrv4ucyxwnhuazlt4iuoxt6vrvw8e6po9d5nk45z9v5fqw4j3g7scp8wymfstduvit90nd101vn21g13txtdaiosl3f28bayhxek85xde6vqartjozvqzyz0a2m8r2kxwerqzvk7butu932n41cyvd0xf97qr75r86i3fps1gx2vyihp7pykepgis2n2pefky2rm8tg1aofanogojnymi3k15jros6ofi1fgo9uf7jetuvgwsxhxw6hndp9mzm61ow6w6ubobx6cp9ikdirh4ts7qgy80w5k2wb013xc71q7nr2rhwl17nd7dz1u1mqiklgdtkxg7b3gx3uv52t6phjqnubqppg08dwyv3ypk3x8m1nclpfrndhw180vsz7uxktyok4xvbekbhjpd4qs0w26xrasax3of9jiqkxzk0slabbtfdogkoewe8smg8rzej6bzhifbqqmk2xyxbmnrg3z2adgqoc48q1z1pfuvq70y0dau0i1ml9n39wxk2w6socj10yx7l0jru3g76uvxxm4vur6btj7pryqukckxqf9lp0kupvp0yr5b1rrrhyd6mdptid6ibrjqy2jzezav4mklc43tmz93s4i2wk6i1kx6i9u33zoigk0fvppo8mxkpy6zfoikatsl54653146sm5ynesqgkkbl2yy4218jkamot83ha3gahsfok8lm0gzsroalbu4zqyp0worsl78biy965j0mhti4kzb3g9rknz04wqrc9l6twnybdwq4n60e2f72ctpee38ldngh429agduzxt3epcbfzqn1i9pyu9u6uj2spodvi5m7oegbnlx0x6kkwltqeett7uky94yht2g96egx20tq2jr0sdpo5vb40lv4yd3plu5n1fjhbzb5gyfl8q1ilg7xku1ytutc4w00s7a5f9tnin3gtz1vuxi2j8wu4h17c8avskdcj5dvm2rcuy7pcaltdh4tpgxbk1osghb46kk7wgsw7fn1juuiw0yu4kvy22b6n376awpo64ikvywckiehlbeyz4rt71u0dn43dvwu9jfnkw3o270tfnn7zece58tzwbgqu1f7f2hxhm8c8f1g6va4aykdmhohv71607sm5448hv07ce1s1dx0cdu4io2p2x4sy7vlklhkkt2idavjjb02lny218r6ds5bua9hqq275phzdvccu6yuggciv2ldcbhh464r9gv3dteqi1sz3hjkd04rpk2uz7j5p6xsc5evxfzsllztwsaflyojem2twgbtymr82624erji2ikat9tzzl0g9o33gtdpvo22o0vmjs77db5bczi995j7rbk0f0ppmr2wtrl00uyv3erzlyerph33ir9o19upqu61lc06xxkljgm55a6yx5ht9f09k274asjyntbox27y4vwgptfre0ac7ja8djwcx1kl9uq66rlrs1rg0om3jzl88yw4i6ea28oebl1ztzjg3x542dmaoml4br58hgnr98ilzijvj0wzya3kemrt50s5bcp5f5v8mzkgn1z2s81mic7yoypsb6fnweutfhcot6f0f5nai6xno7po7otwc25sr887r2o67bimi97m5oxjf3njshh82dpkaum7ltwvqtx2zy51mx2rn6kovqcp6d8sfsuve24nq24vnz83sfowuzlzbtbt12pp54umq98hv0uijn2ydo8dju3vlzenazy6pa6tsc7no748otz3wypof9dl780k6j5vikcsqp36fxgpprphit828lgw0afun01ja85t4hhq2ny1qle7w0cs8lmknah4mu1umo3qocghonaqcsacilaqabcuy8x3xjy12mk63w14d6zyeesoashaulk67zuvqtxuwa13dhajqd5xl7nu65dqzzhuxo4643zgzp5wuwhk1uwsg4cec7wnubqpe3v9tfw3ga8jf1inv6f4hzq4j5xp5n20mcf7pipcpy7xlm7g76l128qkl0vrqm5nisnepctxawr6ai4ci5tnczfidj5ibnygaqcv3344c66e5su2zb3ur3fi9xm1sk3fmmrhqj934z781ecpl3oko5mrnr1aq4970wwv0psfj5w649ffhby5yxlvlflvk705eot0l4svohmnbkibaetydu0e6i5l7yahzxv7efv68xhtnrt8idu105qoq8bhdqqsw5cxj5nvbdqur1gyuubrnhp8lzxsjx7ewatehwf2z9th3o40hgmwk34v4h37fme5t0boyo9m2lw88fpacy5pkejrehctndg8u6nzw71hr3sr07j93r6h0sosqjryfu743kvbrlx1mdmkoalp56lccqpyshaovylkknvwgva0mnjbagcg3gp6f61qyaoncjoeie90jfyxeddy1obq6u4wq2cjfuj58eqdendrjyg3pij4fjtiq6ocgrmgu573p82i52aet49x6zj6f6zxom9gid6wx7tfm3lxx3q0yyvgwnuwukph9cir1qug9tsh0al7bb
ccpu5a9dphfy68m8dhaj6an4tj7hosx85ohpl3lmoawiyhng50h36n2a25cl68uzp4dejyf4c3jjpurprlowmbp19wift0sfc22r2ndguki2bodzbjpbcva5vu3dao5hl58f0i3lifpzqtgbv1rvuvzhl1hpb0m097ndiqcisfa87z9cf5vgcw9wob4zl3fko16lycrblvyncpa5mm9s4mnqsum4fz6c5e6gxn49g3di50g6fqq0zm3yc2uv2zd4r6mlwwi75t8sp8na7ftp580x4h8c3e7ihyhwlfcahc60bh5n791asssxja1az3e2hz9xvebbocua08c9tiv4js0a2yz7f4mzobcvg6fxwm8px1kwwedqo1xq5f18btdi95kcpnisw0c6ac7sxdgmxo9w54s476dop6c5hgh2n6a2p3yo2oyavi8yjoldvykl0f7v28f0idpbkxwcaf7jdpc2173w9pqrbqjvgcqflk1ydwyrmmx1dylm9wkd19pd7rs6wzj7n8rv9t52o8gkhsjuf393v2m3ot 00:28:07.822 22:36:06 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:28:07.822 22:36:06 -- dd/basic_rw.sh@59 -- # gen_conf 00:28:07.822 22:36:06 -- dd/common.sh@31 -- # xtrace_disable 00:28:07.822 22:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:08.080 [2024-07-14 22:36:06.935425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:08.080 [2024-07-14 22:36:06.935814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137970 ] 00:28:08.080 { 00:28:08.080 "subsystems": [ 00:28:08.081 { 00:28:08.081 "subsystem": "bdev", 00:28:08.081 "config": [ 00:28:08.081 { 00:28:08.081 "params": { 00:28:08.081 "trtype": "pcie", 00:28:08.081 "traddr": "0000:00:06.0", 00:28:08.081 "name": "Nvme0" 00:28:08.081 }, 00:28:08.081 "method": "bdev_nvme_attach_controller" 00:28:08.081 }, 00:28:08.081 { 00:28:08.081 "method": "bdev_wait_for_examine" 00:28:08.081 } 00:28:08.081 ] 00:28:08.081 } 00:28:08.081 ] 00:28:08.081 } 00:28:08.081 [2024-07-14 22:36:07.101612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.339 [2024-07-14 22:36:07.271406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.533  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:09.533 00:28:09.533 22:36:08 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:28:09.533 22:36:08 -- dd/basic_rw.sh@65 -- # gen_conf 00:28:09.533 22:36:08 -- dd/common.sh@31 -- # xtrace_disable 00:28:09.533 22:36:08 -- common/autotest_common.sh@10 -- # set +x 00:28:09.533 [2024-07-14 22:36:08.531004] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:09.533 [2024-07-14 22:36:08.531329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138001 ] 00:28:09.533 { 00:28:09.533 "subsystems": [ 00:28:09.533 { 00:28:09.533 "subsystem": "bdev", 00:28:09.533 "config": [ 00:28:09.533 { 00:28:09.533 "params": { 00:28:09.533 "trtype": "pcie", 00:28:09.533 "traddr": "0000:00:06.0", 00:28:09.533 "name": "Nvme0" 00:28:09.533 }, 00:28:09.533 "method": "bdev_nvme_attach_controller" 00:28:09.533 }, 00:28:09.533 { 00:28:09.533 "method": "bdev_wait_for_examine" 00:28:09.533 } 00:28:09.533 ] 00:28:09.533 } 00:28:09.533 ] 00:28:09.533 } 00:28:09.792 [2024-07-14 22:36:08.697802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.792 [2024-07-14 22:36:08.861126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.293  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:11.293 00:28:11.293 22:36:10 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:28:11.294 22:36:10 -- dd/basic_rw.sh@72 -- # [[ 6nbasf5prrny86w24rdd3g1qi1dx5nm0vforrmvbic2sqh4hh343cyp0ljhquordmez9rqh98o3jv0vkwf8t0puihr6vx3b4s2oqj9szeyfq2n7tdjpwk61691zcqw9aok6vg9x7oypay8a1i1mgok640yvv1rfmvi0ygl984vl0b4j7ak2cw8qykvtmqs9b528lucqzfhciq4m98vvmswy8v1sljn616nzbj86mvhnbmbdxalisuck9jsri9vgcxpkvdkd58a4b8awwhc8bbf6q6lkirpvlvz33zocdpy21q8ua1zif764yf84b1xyp70ipxzm9wket9zif20tj1bx8oln1epn68dhx654bln37jzhlg2p6d5y91gqppda1wxq7ypy1lg2153cmzc2a6p9ni8lgbmcoktpd4k3ou86ygd3kgh286m0gm0kfto8v3xrgv4cpkfbxb9cu5q9y1w9jmgjr8x18oc24xgvyoge5b6lnjne7ek7k8mzqtmocgjwilfgh1l4nk7i98n3a9jd0yph5xrm5pivk7ipaiv79hl4zz0jnhqnpecninzr6xcv9swncptx1ho60j200i9e2223m47itedzhl5ecuuh4nbccclupi7bd0vm2j9go4f4avvrmc0f8iuolrx3tu6zynel384b86n3jxgwvi32f46bc7lq8f2it8a55bfuekilq182kx462vf0onm5f2c775h01jrcxpq95ucu09n2owgziwxiwzsh32cl7h7c0b47fyiiyxs01uhshil0ink6idq2xzz8zxfucysrdp046nvg4whxp0pxhzgwnhmlee6tyhc5q300ajhcdnqe944swkt9ue8nz5f06hi294b8mrtvpnd78afbiwu5v82qovk05hpdr8n3w4762h2swuc4m96yhf16980zzgtsxnqfu66t9lbmr85zu24mlm1j0t6dnak00mt6jr7pinm6xlfn8z9e0wcob70qgul9caacj6m5ozuo51kttp48fg63nfaywtywuvk3fhptrv4ucyxwnhuazlt4iuoxt6vrvw8e6po9d5nk45z9v5fqw4j3g7scp8wymfstduvit90nd101vn21g13txtdaiosl3f28bayhxek85xde6vqartjozvqzyz0a2m8r2kxwerqzvk7butu932n41cyvd0xf97qr75r86i3fps1gx2vyihp7pykepgis2n2pefky2rm8tg1aofanogojnymi3k15jros6ofi1fgo9uf7jetuvgwsxhxw6hndp9mzm61ow6w6ubobx6cp9ikdirh4ts7qgy80w5k2wb013xc71q7nr2rhwl17nd7dz1u1mqiklgdtkxg7b3gx3uv52t6phjqnubqppg08dwyv3ypk3x8m1nclpfrndhw180vsz7uxktyok4xvbekbhjpd4qs0w26xrasax3of9jiqkxzk0slabbtfdogkoewe8smg8rzej6bzhifbqqmk2xyxbmnrg3z2adgqoc48q1z1pfuvq70y0dau0i1ml9n39wxk2w6socj10yx7l0jru3g76uvxxm4vur6btj7pryqukckxqf9lp0kupvp0yr5b1rrrhyd6mdptid6ibrjqy2jzezav4mklc43tmz93s4i2wk6i1kx6i9u33zoigk0fvppo8mxkpy6zfoikatsl54653146sm5ynesqgkkbl2yy4218jkamot83ha3gahsfok8lm0gzsroalbu4zqyp0worsl78biy965j0mhti4kzb3g9rknz04wqrc9l6twnybdwq4n60e2f72ctpee38ldngh429agduzxt3epcbfzqn1i9pyu9u6uj2spodvi5m7oegbnlx0x6kkwltqeett7uky94yht2g96egx20tq2jr0sdpo5vb40lv4yd3plu5n1fjhbzb5gyfl8q1ilg7xku1ytutc4w00s7a5f9tnin3gtz1vuxi2j8wu4h17c8avskdcj5dvm2rcuy7pcaltdh4tpgxbk1osghb46kk7wgsw7fn1juuiw0yu4kvy22b6n376awpo64ikvywckiehlbeyz4rt71u0dn43dvwu9jfnkw3o270tfnn7zece58tzwbgqu1f7f2hxhm8c8f1g6va4aykdmhohv71607sm5448hv07ce1s1dx0cdu4io2p2x4sy7vlklhkkt2idavjjb02lny218r6ds5bua9hqq275phzdvccu6yuggciv2ldcbhh464r9gv3dteqi1sz3hjkd04rpk2uz7j5p6xsc5evxfzsllztwsaflyojem2twgbtymr82624erji2ikat9tzzl0g9o33gtdpvo22o0vmjs77db5bczi995j7rbk0f0ppmr2wt
rl00uyv3erzlyerph33ir9o19upqu61lc06xxkljgm55a6yx5ht9f09k274asjyntbox27y4vwgptfre0ac7ja8djwcx1kl9uq66rlrs1rg0om3jzl88yw4i6ea28oebl1ztzjg3x542dmaoml4br58hgnr98ilzijvj0wzya3kemrt50s5bcp5f5v8mzkgn1z2s81mic7yoypsb6fnweutfhcot6f0f5nai6xno7po7otwc25sr887r2o67bimi97m5oxjf3njshh82dpkaum7ltwvqtx2zy51mx2rn6kovqcp6d8sfsuve24nq24vnz83sfowuzlzbtbt12pp54umq98hv0uijn2ydo8dju3vlzenazy6pa6tsc7no748otz3wypof9dl780k6j5vikcsqp36fxgpprphit828lgw0afun01ja85t4hhq2ny1qle7w0cs8lmknah4mu1umo3qocghonaqcsacilaqabcuy8x3xjy12mk63w14d6zyeesoashaulk67zuvqtxuwa13dhajqd5xl7nu65dqzzhuxo4643zgzp5wuwhk1uwsg4cec7wnubqpe3v9tfw3ga8jf1inv6f4hzq4j5xp5n20mcf7pipcpy7xlm7g76l128qkl0vrqm5nisnepctxawr6ai4ci5tnczfidj5ibnygaqcv3344c66e5su2zb3ur3fi9xm1sk3fmmrhqj934z781ecpl3oko5mrnr1aq4970wwv0psfj5w649ffhby5yxlvlflvk705eot0l4svohmnbkibaetydu0e6i5l7yahzxv7efv68xhtnrt8idu105qoq8bhdqqsw5cxj5nvbdqur1gyuubrnhp8lzxsjx7ewatehwf2z9th3o40hgmwk34v4h37fme5t0boyo9m2lw88fpacy5pkejrehctndg8u6nzw71hr3sr07j93r6h0sosqjryfu743kvbrlx1mdmkoalp56lccqpyshaovylkknvwgva0mnjbagcg3gp6f61qyaoncjoeie90jfyxeddy1obq6u4wq2cjfuj58eqdendrjyg3pij4fjtiq6ocgrmgu573p82i52aet49x6zj6f6zxom9gid6wx7tfm3lxx3q0yyvgwnuwukph9cir1qug9tsh0al7bbccpu5a9dphfy68m8dhaj6an4tj7hosx85ohpl3lmoawiyhng50h36n2a25cl68uzp4dejyf4c3jjpurprlowmbp19wift0sfc22r2ndguki2bodzbjpbcva5vu3dao5hl58f0i3lifpzqtgbv1rvuvzhl1hpb0m097ndiqcisfa87z9cf5vgcw9wob4zl3fko16lycrblvyncpa5mm9s4mnqsum4fz6c5e6gxn49g3di50g6fqq0zm3yc2uv2zd4r6mlwwi75t8sp8na7ftp580x4h8c3e7ihyhwlfcahc60bh5n791asssxja1az3e2hz9xvebbocua08c9tiv4js0a2yz7f4mzobcvg6fxwm8px1kwwedqo1xq5f18btdi95kcpnisw0c6ac7sxdgmxo9w54s476dop6c5hgh2n6a2p3yo2oyavi8yjoldvykl0f7v28f0idpbkxwcaf7jdpc2173w9pqrbqjvgcqflk1ydwyrmmx1dylm9wkd19pd7rs6wzj7n8rv9t52o8gkhsjuf393v2m3ot == \6\n\b\a\s\f\5\p\r\r\n\y\8\6\w\2\4\r\d\d\3\g\1\q\i\1\d\x\5\n\m\0\v\f\o\r\r\m\v\b\i\c\2\s\q\h\4\h\h\3\4\3\c\y\p\0\l\j\h\q\u\o\r\d\m\e\z\9\r\q\h\9\8\o\3\j\v\0\v\k\w\f\8\t\0\p\u\i\h\r\6\v\x\3\b\4\s\2\o\q\j\9\s\z\e\y\f\q\2\n\7\t\d\j\p\w\k\6\1\6\9\1\z\c\q\w\9\a\o\k\6\v\g\9\x\7\o\y\p\a\y\8\a\1\i\1\m\g\o\k\6\4\0\y\v\v\1\r\f\m\v\i\0\y\g\l\9\8\4\v\l\0\b\4\j\7\a\k\2\c\w\8\q\y\k\v\t\m\q\s\9\b\5\2\8\l\u\c\q\z\f\h\c\i\q\4\m\9\8\v\v\m\s\w\y\8\v\1\s\l\j\n\6\1\6\n\z\b\j\8\6\m\v\h\n\b\m\b\d\x\a\l\i\s\u\c\k\9\j\s\r\i\9\v\g\c\x\p\k\v\d\k\d\5\8\a\4\b\8\a\w\w\h\c\8\b\b\f\6\q\6\l\k\i\r\p\v\l\v\z\3\3\z\o\c\d\p\y\2\1\q\8\u\a\1\z\i\f\7\6\4\y\f\8\4\b\1\x\y\p\7\0\i\p\x\z\m\9\w\k\e\t\9\z\i\f\2\0\t\j\1\b\x\8\o\l\n\1\e\p\n\6\8\d\h\x\6\5\4\b\l\n\3\7\j\z\h\l\g\2\p\6\d\5\y\9\1\g\q\p\p\d\a\1\w\x\q\7\y\p\y\1\l\g\2\1\5\3\c\m\z\c\2\a\6\p\9\n\i\8\l\g\b\m\c\o\k\t\p\d\4\k\3\o\u\8\6\y\g\d\3\k\g\h\2\8\6\m\0\g\m\0\k\f\t\o\8\v\3\x\r\g\v\4\c\p\k\f\b\x\b\9\c\u\5\q\9\y\1\w\9\j\m\g\j\r\8\x\1\8\o\c\2\4\x\g\v\y\o\g\e\5\b\6\l\n\j\n\e\7\e\k\7\k\8\m\z\q\t\m\o\c\g\j\w\i\l\f\g\h\1\l\4\n\k\7\i\9\8\n\3\a\9\j\d\0\y\p\h\5\x\r\m\5\p\i\v\k\7\i\p\a\i\v\7\9\h\l\4\z\z\0\j\n\h\q\n\p\e\c\n\i\n\z\r\6\x\c\v\9\s\w\n\c\p\t\x\1\h\o\6\0\j\2\0\0\i\9\e\2\2\2\3\m\4\7\i\t\e\d\z\h\l\5\e\c\u\u\h\4\n\b\c\c\c\l\u\p\i\7\b\d\0\v\m\2\j\9\g\o\4\f\4\a\v\v\r\m\c\0\f\8\i\u\o\l\r\x\3\t\u\6\z\y\n\e\l\3\8\4\b\8\6\n\3\j\x\g\w\v\i\3\2\f\4\6\b\c\7\l\q\8\f\2\i\t\8\a\5\5\b\f\u\e\k\i\l\q\1\8\2\k\x\4\6\2\v\f\0\o\n\m\5\f\2\c\7\7\5\h\0\1\j\r\c\x\p\q\9\5\u\c\u\0\9\n\2\o\w\g\z\i\w\x\i\w\z\s\h\3\2\c\l\7\h\7\c\0\b\4\7\f\y\i\i\y\x\s\0\1\u\h\s\h\i\l\0\i\n\k\6\i\d\q\2\x\z\z\8\z\x\f\u\c\y\s\r\d\p\0\4\6\n\v\g\4\w\h\x\p\0\p\x\h\z\g\w\n\h\m\l\e\e\6\t\y\h\c\5\q\3\0\0\a\j\h\c\d\n\q\e\9\4\4\s\w\k\t\9\u\e\8\n\z\5\f\0\6\h\i\2\9\4\b\8\m\r\t\v\p\n\d\7\8\a\f\b\i\w\u\5\v\8\2\q\o\v\k\0\5\h\p\d\r\8\n\3\w\4\7\6\2\h\2\s\w\u\c\4\m
\9\6\y\h\f\1\6\9\8\0\z\z\g\t\s\x\n\q\f\u\6\6\t\9\l\b\m\r\8\5\z\u\2\4\m\l\m\1\j\0\t\6\d\n\a\k\0\0\m\t\6\j\r\7\p\i\n\m\6\x\l\f\n\8\z\9\e\0\w\c\o\b\7\0\q\g\u\l\9\c\a\a\c\j\6\m\5\o\z\u\o\5\1\k\t\t\p\4\8\f\g\6\3\n\f\a\y\w\t\y\w\u\v\k\3\f\h\p\t\r\v\4\u\c\y\x\w\n\h\u\a\z\l\t\4\i\u\o\x\t\6\v\r\v\w\8\e\6\p\o\9\d\5\n\k\4\5\z\9\v\5\f\q\w\4\j\3\g\7\s\c\p\8\w\y\m\f\s\t\d\u\v\i\t\9\0\n\d\1\0\1\v\n\2\1\g\1\3\t\x\t\d\a\i\o\s\l\3\f\2\8\b\a\y\h\x\e\k\8\5\x\d\e\6\v\q\a\r\t\j\o\z\v\q\z\y\z\0\a\2\m\8\r\2\k\x\w\e\r\q\z\v\k\7\b\u\t\u\9\3\2\n\4\1\c\y\v\d\0\x\f\9\7\q\r\7\5\r\8\6\i\3\f\p\s\1\g\x\2\v\y\i\h\p\7\p\y\k\e\p\g\i\s\2\n\2\p\e\f\k\y\2\r\m\8\t\g\1\a\o\f\a\n\o\g\o\j\n\y\m\i\3\k\1\5\j\r\o\s\6\o\f\i\1\f\g\o\9\u\f\7\j\e\t\u\v\g\w\s\x\h\x\w\6\h\n\d\p\9\m\z\m\6\1\o\w\6\w\6\u\b\o\b\x\6\c\p\9\i\k\d\i\r\h\4\t\s\7\q\g\y\8\0\w\5\k\2\w\b\0\1\3\x\c\7\1\q\7\n\r\2\r\h\w\l\1\7\n\d\7\d\z\1\u\1\m\q\i\k\l\g\d\t\k\x\g\7\b\3\g\x\3\u\v\5\2\t\6\p\h\j\q\n\u\b\q\p\p\g\0\8\d\w\y\v\3\y\p\k\3\x\8\m\1\n\c\l\p\f\r\n\d\h\w\1\8\0\v\s\z\7\u\x\k\t\y\o\k\4\x\v\b\e\k\b\h\j\p\d\4\q\s\0\w\2\6\x\r\a\s\a\x\3\o\f\9\j\i\q\k\x\z\k\0\s\l\a\b\b\t\f\d\o\g\k\o\e\w\e\8\s\m\g\8\r\z\e\j\6\b\z\h\i\f\b\q\q\m\k\2\x\y\x\b\m\n\r\g\3\z\2\a\d\g\q\o\c\4\8\q\1\z\1\p\f\u\v\q\7\0\y\0\d\a\u\0\i\1\m\l\9\n\3\9\w\x\k\2\w\6\s\o\c\j\1\0\y\x\7\l\0\j\r\u\3\g\7\6\u\v\x\x\m\4\v\u\r\6\b\t\j\7\p\r\y\q\u\k\c\k\x\q\f\9\l\p\0\k\u\p\v\p\0\y\r\5\b\1\r\r\r\h\y\d\6\m\d\p\t\i\d\6\i\b\r\j\q\y\2\j\z\e\z\a\v\4\m\k\l\c\4\3\t\m\z\9\3\s\4\i\2\w\k\6\i\1\k\x\6\i\9\u\3\3\z\o\i\g\k\0\f\v\p\p\o\8\m\x\k\p\y\6\z\f\o\i\k\a\t\s\l\5\4\6\5\3\1\4\6\s\m\5\y\n\e\s\q\g\k\k\b\l\2\y\y\4\2\1\8\j\k\a\m\o\t\8\3\h\a\3\g\a\h\s\f\o\k\8\l\m\0\g\z\s\r\o\a\l\b\u\4\z\q\y\p\0\w\o\r\s\l\7\8\b\i\y\9\6\5\j\0\m\h\t\i\4\k\z\b\3\g\9\r\k\n\z\0\4\w\q\r\c\9\l\6\t\w\n\y\b\d\w\q\4\n\6\0\e\2\f\7\2\c\t\p\e\e\3\8\l\d\n\g\h\4\2\9\a\g\d\u\z\x\t\3\e\p\c\b\f\z\q\n\1\i\9\p\y\u\9\u\6\u\j\2\s\p\o\d\v\i\5\m\7\o\e\g\b\n\l\x\0\x\6\k\k\w\l\t\q\e\e\t\t\7\u\k\y\9\4\y\h\t\2\g\9\6\e\g\x\2\0\t\q\2\j\r\0\s\d\p\o\5\v\b\4\0\l\v\4\y\d\3\p\l\u\5\n\1\f\j\h\b\z\b\5\g\y\f\l\8\q\1\i\l\g\7\x\k\u\1\y\t\u\t\c\4\w\0\0\s\7\a\5\f\9\t\n\i\n\3\g\t\z\1\v\u\x\i\2\j\8\w\u\4\h\1\7\c\8\a\v\s\k\d\c\j\5\d\v\m\2\r\c\u\y\7\p\c\a\l\t\d\h\4\t\p\g\x\b\k\1\o\s\g\h\b\4\6\k\k\7\w\g\s\w\7\f\n\1\j\u\u\i\w\0\y\u\4\k\v\y\2\2\b\6\n\3\7\6\a\w\p\o\6\4\i\k\v\y\w\c\k\i\e\h\l\b\e\y\z\4\r\t\7\1\u\0\d\n\4\3\d\v\w\u\9\j\f\n\k\w\3\o\2\7\0\t\f\n\n\7\z\e\c\e\5\8\t\z\w\b\g\q\u\1\f\7\f\2\h\x\h\m\8\c\8\f\1\g\6\v\a\4\a\y\k\d\m\h\o\h\v\7\1\6\0\7\s\m\5\4\4\8\h\v\0\7\c\e\1\s\1\d\x\0\c\d\u\4\i\o\2\p\2\x\4\s\y\7\v\l\k\l\h\k\k\t\2\i\d\a\v\j\j\b\0\2\l\n\y\2\1\8\r\6\d\s\5\b\u\a\9\h\q\q\2\7\5\p\h\z\d\v\c\c\u\6\y\u\g\g\c\i\v\2\l\d\c\b\h\h\4\6\4\r\9\g\v\3\d\t\e\q\i\1\s\z\3\h\j\k\d\0\4\r\p\k\2\u\z\7\j\5\p\6\x\s\c\5\e\v\x\f\z\s\l\l\z\t\w\s\a\f\l\y\o\j\e\m\2\t\w\g\b\t\y\m\r\8\2\6\2\4\e\r\j\i\2\i\k\a\t\9\t\z\z\l\0\g\9\o\3\3\g\t\d\p\v\o\2\2\o\0\v\m\j\s\7\7\d\b\5\b\c\z\i\9\9\5\j\7\r\b\k\0\f\0\p\p\m\r\2\w\t\r\l\0\0\u\y\v\3\e\r\z\l\y\e\r\p\h\3\3\i\r\9\o\1\9\u\p\q\u\6\1\l\c\0\6\x\x\k\l\j\g\m\5\5\a\6\y\x\5\h\t\9\f\0\9\k\2\7\4\a\s\j\y\n\t\b\o\x\2\7\y\4\v\w\g\p\t\f\r\e\0\a\c\7\j\a\8\d\j\w\c\x\1\k\l\9\u\q\6\6\r\l\r\s\1\r\g\0\o\m\3\j\z\l\8\8\y\w\4\i\6\e\a\2\8\o\e\b\l\1\z\t\z\j\g\3\x\5\4\2\d\m\a\o\m\l\4\b\r\5\8\h\g\n\r\9\8\i\l\z\i\j\v\j\0\w\z\y\a\3\k\e\m\r\t\5\0\s\5\b\c\p\5\f\5\v\8\m\z\k\g\n\1\z\2\s\8\1\m\i\c\7\y\o\y\p\s\b\6\f\n\w\e\u\t\f\h\c\o\t\6\f\0\f\5\n\a\i\6\x\n\o\7\p\o\7\o\t\w\c\2\5\s\r\8\8\7\r\2\o\6\7\b\i\m\i\9\7\m\5\o\x\j\f\3\n\j\s\h\h\8\2\d\p\k\a\u\m\7\l\t\w\v\q\t\x\2\z\y\5\1\m\x\2\r\n\6\k\o\v\q\c\p\6\d\8\s\f\s\u\v\e\
2\4\n\q\2\4\v\n\z\8\3\s\f\o\w\u\z\l\z\b\t\b\t\1\2\p\p\5\4\u\m\q\9\8\h\v\0\u\i\j\n\2\y\d\o\8\d\j\u\3\v\l\z\e\n\a\z\y\6\p\a\6\t\s\c\7\n\o\7\4\8\o\t\z\3\w\y\p\o\f\9\d\l\7\8\0\k\6\j\5\v\i\k\c\s\q\p\3\6\f\x\g\p\p\r\p\h\i\t\8\2\8\l\g\w\0\a\f\u\n\0\1\j\a\8\5\t\4\h\h\q\2\n\y\1\q\l\e\7\w\0\c\s\8\l\m\k\n\a\h\4\m\u\1\u\m\o\3\q\o\c\g\h\o\n\a\q\c\s\a\c\i\l\a\q\a\b\c\u\y\8\x\3\x\j\y\1\2\m\k\6\3\w\1\4\d\6\z\y\e\e\s\o\a\s\h\a\u\l\k\6\7\z\u\v\q\t\x\u\w\a\1\3\d\h\a\j\q\d\5\x\l\7\n\u\6\5\d\q\z\z\h\u\x\o\4\6\4\3\z\g\z\p\5\w\u\w\h\k\1\u\w\s\g\4\c\e\c\7\w\n\u\b\q\p\e\3\v\9\t\f\w\3\g\a\8\j\f\1\i\n\v\6\f\4\h\z\q\4\j\5\x\p\5\n\2\0\m\c\f\7\p\i\p\c\p\y\7\x\l\m\7\g\7\6\l\1\2\8\q\k\l\0\v\r\q\m\5\n\i\s\n\e\p\c\t\x\a\w\r\6\a\i\4\c\i\5\t\n\c\z\f\i\d\j\5\i\b\n\y\g\a\q\c\v\3\3\4\4\c\6\6\e\5\s\u\2\z\b\3\u\r\3\f\i\9\x\m\1\s\k\3\f\m\m\r\h\q\j\9\3\4\z\7\8\1\e\c\p\l\3\o\k\o\5\m\r\n\r\1\a\q\4\9\7\0\w\w\v\0\p\s\f\j\5\w\6\4\9\f\f\h\b\y\5\y\x\l\v\l\f\l\v\k\7\0\5\e\o\t\0\l\4\s\v\o\h\m\n\b\k\i\b\a\e\t\y\d\u\0\e\6\i\5\l\7\y\a\h\z\x\v\7\e\f\v\6\8\x\h\t\n\r\t\8\i\d\u\1\0\5\q\o\q\8\b\h\d\q\q\s\w\5\c\x\j\5\n\v\b\d\q\u\r\1\g\y\u\u\b\r\n\h\p\8\l\z\x\s\j\x\7\e\w\a\t\e\h\w\f\2\z\9\t\h\3\o\4\0\h\g\m\w\k\3\4\v\4\h\3\7\f\m\e\5\t\0\b\o\y\o\9\m\2\l\w\8\8\f\p\a\c\y\5\p\k\e\j\r\e\h\c\t\n\d\g\8\u\6\n\z\w\7\1\h\r\3\s\r\0\7\j\9\3\r\6\h\0\s\o\s\q\j\r\y\f\u\7\4\3\k\v\b\r\l\x\1\m\d\m\k\o\a\l\p\5\6\l\c\c\q\p\y\s\h\a\o\v\y\l\k\k\n\v\w\g\v\a\0\m\n\j\b\a\g\c\g\3\g\p\6\f\6\1\q\y\a\o\n\c\j\o\e\i\e\9\0\j\f\y\x\e\d\d\y\1\o\b\q\6\u\4\w\q\2\c\j\f\u\j\5\8\e\q\d\e\n\d\r\j\y\g\3\p\i\j\4\f\j\t\i\q\6\o\c\g\r\m\g\u\5\7\3\p\8\2\i\5\2\a\e\t\4\9\x\6\z\j\6\f\6\z\x\o\m\9\g\i\d\6\w\x\7\t\f\m\3\l\x\x\3\q\0\y\y\v\g\w\n\u\w\u\k\p\h\9\c\i\r\1\q\u\g\9\t\s\h\0\a\l\7\b\b\c\c\p\u\5\a\9\d\p\h\f\y\6\8\m\8\d\h\a\j\6\a\n\4\t\j\7\h\o\s\x\8\5\o\h\p\l\3\l\m\o\a\w\i\y\h\n\g\5\0\h\3\6\n\2\a\2\5\c\l\6\8\u\z\p\4\d\e\j\y\f\4\c\3\j\j\p\u\r\p\r\l\o\w\m\b\p\1\9\w\i\f\t\0\s\f\c\2\2\r\2\n\d\g\u\k\i\2\b\o\d\z\b\j\p\b\c\v\a\5\v\u\3\d\a\o\5\h\l\5\8\f\0\i\3\l\i\f\p\z\q\t\g\b\v\1\r\v\u\v\z\h\l\1\h\p\b\0\m\0\9\7\n\d\i\q\c\i\s\f\a\8\7\z\9\c\f\5\v\g\c\w\9\w\o\b\4\z\l\3\f\k\o\1\6\l\y\c\r\b\l\v\y\n\c\p\a\5\m\m\9\s\4\m\n\q\s\u\m\4\f\z\6\c\5\e\6\g\x\n\4\9\g\3\d\i\5\0\g\6\f\q\q\0\z\m\3\y\c\2\u\v\2\z\d\4\r\6\m\l\w\w\i\7\5\t\8\s\p\8\n\a\7\f\t\p\5\8\0\x\4\h\8\c\3\e\7\i\h\y\h\w\l\f\c\a\h\c\6\0\b\h\5\n\7\9\1\a\s\s\s\x\j\a\1\a\z\3\e\2\h\z\9\x\v\e\b\b\o\c\u\a\0\8\c\9\t\i\v\4\j\s\0\a\2\y\z\7\f\4\m\z\o\b\c\v\g\6\f\x\w\m\8\p\x\1\k\w\w\e\d\q\o\1\x\q\5\f\1\8\b\t\d\i\9\5\k\c\p\n\i\s\w\0\c\6\a\c\7\s\x\d\g\m\x\o\9\w\5\4\s\4\7\6\d\o\p\6\c\5\h\g\h\2\n\6\a\2\p\3\y\o\2\o\y\a\v\i\8\y\j\o\l\d\v\y\k\l\0\f\7\v\2\8\f\0\i\d\p\b\k\x\w\c\a\f\7\j\d\p\c\2\1\7\3\w\9\p\q\r\b\q\j\v\g\c\q\f\l\k\1\y\d\w\y\r\m\m\x\1\d\y\l\m\9\w\k\d\1\9\p\d\7\r\s\6\w\z\j\7\n\8\r\v\9\t\5\2\o\8\g\k\h\s\j\u\f\3\9\3\v\2\m\3\o\t ]] 00:28:11.294 00:28:11.294 real 0m3.328s 00:28:11.294 user 0m2.745s 00:28:11.294 sys 0m0.456s 00:28:11.294 22:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.294 22:36:10 -- common/autotest_common.sh@10 -- # set +x 00:28:11.294 ************************************ 00:28:11.294 END TEST dd_rw_offset 00:28:11.294 ************************************ 00:28:11.294 22:36:10 -- dd/basic_rw.sh@1 -- # cleanup 00:28:11.294 22:36:10 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:28:11.294 22:36:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:11.294 22:36:10 -- dd/common.sh@11 -- # local nvme_ref= 00:28:11.294 22:36:10 -- dd/common.sh@12 -- # local size=0xffff 00:28:11.294 22:36:10 -- dd/common.sh@14 -- # local bs=1048576 
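The trace above is the heart of dd_rw_offset: a 4 KiB random payload is written to the Nvme0n1 bdev at block offset 1 with --seek, read back with --skip, and compared byte-for-byte in bash. The right-hand side of [[ == ]] is a glob pattern, which is why every character of the expected string appears backslash-escaped; quoting the expansion achieves the same literal match. A minimal sketch of the idiom, with illustrative file names and a hypothetical gen_conf standing in for the helper that emits the JSON bdev config shown above (bash process substitution is what produces the /dev/fd/62 path):

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)            # write 4 KiB at block offset 1
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)  # read the same block back
    read -rn4096 data_check < dd.dump1
    [[ $data_check == "$data" ]]   # quoted RHS forces a literal comparison, same effect as the escaping above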
00:28:11.294 22:36:10 -- dd/common.sh@15 -- # local count=1 00:28:11.294 22:36:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:11.294 22:36:10 -- dd/common.sh@18 -- # gen_conf 00:28:11.294 22:36:10 -- dd/common.sh@31 -- # xtrace_disable 00:28:11.294 22:36:10 -- common/autotest_common.sh@10 -- # set +x 00:28:11.294 [2024-07-14 22:36:10.250948] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:11.294 [2024-07-14 22:36:10.251330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138041 ] 00:28:11.294 { 00:28:11.294 "subsystems": [ 00:28:11.294 { 00:28:11.294 "subsystem": "bdev", 00:28:11.294 "config": [ 00:28:11.294 { 00:28:11.294 "params": { 00:28:11.294 "trtype": "pcie", 00:28:11.294 "traddr": "0000:00:06.0", 00:28:11.294 "name": "Nvme0" 00:28:11.294 }, 00:28:11.294 "method": "bdev_nvme_attach_controller" 00:28:11.294 }, 00:28:11.294 { 00:28:11.294 "method": "bdev_wait_for_examine" 00:28:11.294 } 00:28:11.294 ] 00:28:11.294 } 00:28:11.294 ] 00:28:11.294 } 00:28:11.553 [2024-07-14 22:36:10.418061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.553 [2024-07-14 22:36:10.589609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.712  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:12.712 00:28:12.972 22:36:11 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:12.972 ************************************ 00:28:12.972 END TEST spdk_dd_basic_rw 00:28:12.972 ************************************ 00:28:12.972 00:28:12.972 real 0m41.756s 00:28:12.972 user 0m34.533s 00:28:12.972 sys 0m5.661s 00:28:12.972 22:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.972 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:28:12.972 22:36:11 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:12.972 22:36:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:12.972 22:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.972 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:28:12.972 ************************************ 00:28:12.972 START TEST spdk_dd_posix 00:28:12.972 ************************************ 00:28:12.972 22:36:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:12.972 * Looking for test storage... 
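Every suite and subtest in this log is driven by the same run_test wrapper: it prints the START banner, times the command, and prints the END banner, which is where the real/user/sys totals above come from. A simplified approximation (the real helper lives in autotest_common.sh and also manages the xtrace state visible throughout):

    run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                                    # the timed body yields the real/user/sys lines
      echo "************ END TEST $name ************"
    }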
00:28:12.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:12.972 22:36:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.972 22:36:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.972 22:36:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.972 22:36:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.972 22:36:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.972 22:36:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.972 22:36:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.972 22:36:11 -- paths/export.sh@5 -- # export PATH 00:28:12.972 22:36:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.972 22:36:11 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:28:12.972 22:36:11 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:28:12.972 22:36:11 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:28:12.972 22:36:11 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:28:12.972 22:36:11 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:12.972 22:36:11 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:12.972 22:36:11 -- dd/posix.sh@130 -- # tests 00:28:12.972 22:36:11 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:28:12.972 * First test run, using AIO 00:28:12.972 22:36:11 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:28:12.972 22:36:11 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:12.972 22:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.972 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:28:12.972 ************************************ 00:28:12.972 START TEST dd_flag_append 00:28:12.972 ************************************ 00:28:12.972 22:36:11 -- common/autotest_common.sh@1104 -- # append 00:28:12.972 22:36:11 -- dd/posix.sh@16 -- # local dump0 00:28:12.972 22:36:11 -- dd/posix.sh@17 -- # local dump1 00:28:12.972 22:36:11 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:12.972 22:36:11 -- dd/common.sh@98 -- # xtrace_disable 00:28:12.972 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:28:12.972 22:36:11 -- dd/posix.sh@19 -- # dump0=6bnrxwsvlb5uq2o5shgljvuw27uppn72 00:28:12.972 22:36:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:12.972 22:36:11 -- dd/common.sh@98 -- # xtrace_disable 00:28:12.972 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:28:12.972 22:36:11 -- dd/posix.sh@20 -- # dump1=biaa0jb5aebwhjm7vqlw38nv5zmnk3ge 00:28:12.972 22:36:11 -- dd/posix.sh@22 -- # printf %s 6bnrxwsvlb5uq2o5shgljvuw27uppn72 00:28:12.972 22:36:11 -- dd/posix.sh@23 -- # printf %s biaa0jb5aebwhjm7vqlw38nv5zmnk3ge 00:28:12.972 22:36:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:12.972 [2024-07-14 22:36:12.007339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:12.972 [2024-07-14 22:36:12.007718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138131 ] 00:28:13.231 [2024-07-14 22:36:12.151214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.490 [2024-07-14 22:36:12.319934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.688  Copying: 32/32 [B] (average 31 kBps) 00:28:14.688 00:28:14.688 ************************************ 00:28:14.688 END TEST dd_flag_append 00:28:14.689 ************************************ 00:28:14.689 22:36:13 -- dd/posix.sh@27 -- # [[ biaa0jb5aebwhjm7vqlw38nv5zmnk3ge6bnrxwsvlb5uq2o5shgljvuw27uppn72 == \b\i\a\a\0\j\b\5\a\e\b\w\h\j\m\7\v\q\l\w\3\8\n\v\5\z\m\n\k\3\g\e\6\b\n\r\x\w\s\v\l\b\5\u\q\2\o\5\s\h\g\l\j\v\u\w\2\7\u\p\p\n\7\2 ]] 00:28:14.689 00:28:14.689 real 0m1.564s 00:28:14.689 user 0m1.259s 00:28:14.689 sys 0m0.172s 00:28:14.689 22:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.689 22:36:13 -- common/autotest_common.sh@10 -- # set +x 00:28:14.689 22:36:13 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:28:14.689 22:36:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:14.689 22:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.689 22:36:13 -- common/autotest_common.sh@10 -- # set +x 00:28:14.689 ************************************ 00:28:14.689 START TEST dd_flag_directory 00:28:14.689 ************************************ 00:28:14.689 22:36:13 -- common/autotest_common.sh@1104 -- # directory 00:28:14.689 22:36:13 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:14.689 22:36:13 -- common/autotest_common.sh@640 -- # local es=0 
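dd_flag_append above seeds dump0 and dump1 with two 32-byte random strings, reruns spdk_dd with --oflag=append, and passes only if the destination now holds its original bytes followed by the appended ones. Sketched with the same flags and illustrative paths:

    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]   # O_APPEND writes land after the existing bytes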
00:28:14.689 22:36:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:14.689 22:36:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.689 22:36:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:14.689 22:36:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.689 22:36:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:14.689 22:36:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.689 22:36:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:14.689 22:36:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.689 22:36:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:14.689 22:36:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:14.689 [2024-07-14 22:36:13.637553] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:14.689 [2024-07-14 22:36:13.637931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138179 ] 00:28:14.946 [2024-07-14 22:36:13.803664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.946 [2024-07-14 22:36:13.967572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.203 [2024-07-14 22:36:14.214200] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:15.203 [2024-07-14 22:36:14.214526] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:15.203 [2024-07-14 22:36:14.214580] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:15.769 [2024-07-14 22:36:14.805433] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:16.336 22:36:15 -- common/autotest_common.sh@643 -- # es=236 00:28:16.336 22:36:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:16.336 22:36:15 -- common/autotest_common.sh@652 -- # es=108 00:28:16.336 22:36:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:16.336 22:36:15 -- common/autotest_common.sh@660 -- # es=1 00:28:16.336 22:36:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:16.336 22:36:15 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:16.336 22:36:15 -- common/autotest_common.sh@640 -- # local es=0 00:28:16.336 22:36:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:16.336 22:36:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.336 22:36:15 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.336 22:36:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.336 22:36:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.336 22:36:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.336 22:36:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.336 22:36:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.336 22:36:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:16.336 22:36:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:16.336 [2024-07-14 22:36:15.206678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:16.336 [2024-07-14 22:36:15.207558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138204 ] 00:28:16.336 [2024-07-14 22:36:15.375667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.594 [2024-07-14 22:36:15.540535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.852 [2024-07-14 22:36:15.786086] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:16.852 [2024-07-14 22:36:15.786427] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:16.852 [2024-07-14 22:36:15.786483] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:17.420 [2024-07-14 22:36:16.373676] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:17.678 ************************************ 00:28:17.678 END TEST dd_flag_directory 00:28:17.678 ************************************ 00:28:17.678 22:36:16 -- common/autotest_common.sh@643 -- # es=236 00:28:17.678 22:36:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:17.678 22:36:16 -- common/autotest_common.sh@652 -- # es=108 00:28:17.678 22:36:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:17.678 22:36:16 -- common/autotest_common.sh@660 -- # es=1 00:28:17.678 22:36:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:17.678 00:28:17.678 real 0m3.136s 00:28:17.678 user 0m2.495s 00:28:17.678 sys 0m0.435s 00:28:17.678 22:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.678 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:28:17.678 22:36:16 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:28:17.678 22:36:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:17.678 22:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:17.678 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:28:17.937 ************************************ 00:28:17.937 START TEST dd_flag_nofollow 00:28:17.937 ************************************ 00:28:17.937 22:36:16 -- common/autotest_common.sh@1104 -- # nofollow 00:28:17.937 22:36:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:17.937 22:36:16 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:17.937 22:36:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:17.937 22:36:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:17.937 22:36:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:17.937 22:36:16 -- common/autotest_common.sh@640 -- # local es=0 00:28:17.937 22:36:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:17.937 22:36:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.937 22:36:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:17.937 22:36:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.937 22:36:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:17.937 22:36:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.937 22:36:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:17.937 22:36:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.937 22:36:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:17.937 22:36:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:17.937 [2024-07-14 22:36:16.829115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:17.937 [2024-07-14 22:36:16.829506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138250 ] 00:28:17.937 [2024-07-14 22:36:16.997619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.195 [2024-07-14 22:36:17.154843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.454 [2024-07-14 22:36:17.401722] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:18.454 [2024-07-14 22:36:17.401931] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:18.454 [2024-07-14 22:36:17.402000] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:19.022 [2024-07-14 22:36:17.975631] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:19.281 22:36:18 -- common/autotest_common.sh@643 -- # es=216 00:28:19.281 22:36:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:19.281 22:36:18 -- common/autotest_common.sh@652 -- # es=88 00:28:19.281 22:36:18 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:19.281 22:36:18 -- common/autotest_common.sh@660 -- # es=1 00:28:19.281 22:36:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:19.281 22:36:18 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:19.281 22:36:18 -- common/autotest_common.sh@640 -- # local es=0 00:28:19.281 22:36:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:19.281 22:36:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.281 22:36:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.281 22:36:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.281 22:36:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.281 22:36:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.281 22:36:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.281 22:36:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.281 22:36:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:19.281 22:36:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:19.539 [2024-07-14 22:36:18.365425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
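The failure above is the first half of dd_flag_nofollow: ln -fs plants a symlink over the dump file, and opening it with --iflag=nofollow (O_NOFOLLOW under the hood) fails with ELOOP, which spdk_dd surfaces as "Too many levels of symbolic links"; the same check is then repeated on the write side with --oflag=nofollow. In isolation, with illustrative names:

    ln -fs dd.dump0 dd.dump0.link
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must fail on the link
    spdk_dd --if=dd.dump0.link --of=dd.dump1                        # without nofollow the link is followed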
00:28:19.539 [2024-07-14 22:36:18.365801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138271 ] 00:28:19.539 [2024-07-14 22:36:18.533861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.798 [2024-07-14 22:36:18.688053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.056 [2024-07-14 22:36:18.939430] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:20.056 [2024-07-14 22:36:18.939686] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:20.056 [2024-07-14 22:36:18.939747] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:20.621 [2024-07-14 22:36:19.512702] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:20.879 22:36:19 -- common/autotest_common.sh@643 -- # es=216 00:28:20.879 22:36:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:20.879 22:36:19 -- common/autotest_common.sh@652 -- # es=88 00:28:20.879 22:36:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:20.879 22:36:19 -- common/autotest_common.sh@660 -- # es=1 00:28:20.879 22:36:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:20.879 22:36:19 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:20.879 22:36:19 -- dd/common.sh@98 -- # xtrace_disable 00:28:20.879 22:36:19 -- common/autotest_common.sh@10 -- # set +x 00:28:20.879 22:36:19 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:20.879 [2024-07-14 22:36:19.911023] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
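The NOT wrapper that gates both nofollow runs inverts an exit status, with the es bookkeeping traced above mapping specific failure codes (216, 88) down to a single pass/fail bit. A simplified approximation; the real helper in autotest_common.sh is more involved:

    NOT() {
      if "$@"; then return 1; fi   # unexpected success
      return 0                     # expected failure
    }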
00:28:20.879 [2024-07-14 22:36:19.911638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138293 ] 00:28:21.137 [2024-07-14 22:36:20.078341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.396 [2024-07-14 22:36:20.241526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.588  Copying: 512/512 [B] (average 500 kBps) 00:28:22.588 00:28:22.588 ************************************ 00:28:22.588 END TEST dd_flag_nofollow 00:28:22.588 ************************************ 00:28:22.588 22:36:21 -- dd/posix.sh@49 -- # [[ baf65he6mj03jd73ckdxckmpjuy12yi7xy6vqg6nwzp9vn7m35c83zbwh3h7m23onfycl97kwpm53wc81wzg3p6dpour3e0ntw8bs8k16641w3lnkaj61sfuir173cf8rpjkmc72hfa0333785d45nmmne59flz15l95k3zasjoxpl9zeahjgp155qpd9k6vq7q5233mmh5wcjot1nc9uc8gd9yncyifmyga8usgjm2jxmc6oqrl15w33mc2vjgbv1xqkq4as6mqosb2nd20pbjtojfvedjnmzqe7hofrv944z313u9fr0dsp786z2oqtjpabkfqok6302xlcije7m9ub0p2ttiejtpjdufelkb1fy6xu6c5l4t6chxjej6aprbkpmt713ld94hblz0a9xbdmo8h8xcmdub5pex2xhepnwqnmb5u9s1mfg90ec2yp35hgw7c56fddplmju94izsqu4a1wojb2w83l3fhqoc104pvd706os5c8a0b769z == \b\a\f\6\5\h\e\6\m\j\0\3\j\d\7\3\c\k\d\x\c\k\m\p\j\u\y\1\2\y\i\7\x\y\6\v\q\g\6\n\w\z\p\9\v\n\7\m\3\5\c\8\3\z\b\w\h\3\h\7\m\2\3\o\n\f\y\c\l\9\7\k\w\p\m\5\3\w\c\8\1\w\z\g\3\p\6\d\p\o\u\r\3\e\0\n\t\w\8\b\s\8\k\1\6\6\4\1\w\3\l\n\k\a\j\6\1\s\f\u\i\r\1\7\3\c\f\8\r\p\j\k\m\c\7\2\h\f\a\0\3\3\3\7\8\5\d\4\5\n\m\m\n\e\5\9\f\l\z\1\5\l\9\5\k\3\z\a\s\j\o\x\p\l\9\z\e\a\h\j\g\p\1\5\5\q\p\d\9\k\6\v\q\7\q\5\2\3\3\m\m\h\5\w\c\j\o\t\1\n\c\9\u\c\8\g\d\9\y\n\c\y\i\f\m\y\g\a\8\u\s\g\j\m\2\j\x\m\c\6\o\q\r\l\1\5\w\3\3\m\c\2\v\j\g\b\v\1\x\q\k\q\4\a\s\6\m\q\o\s\b\2\n\d\2\0\p\b\j\t\o\j\f\v\e\d\j\n\m\z\q\e\7\h\o\f\r\v\9\4\4\z\3\1\3\u\9\f\r\0\d\s\p\7\8\6\z\2\o\q\t\j\p\a\b\k\f\q\o\k\6\3\0\2\x\l\c\i\j\e\7\m\9\u\b\0\p\2\t\t\i\e\j\t\p\j\d\u\f\e\l\k\b\1\f\y\6\x\u\6\c\5\l\4\t\6\c\h\x\j\e\j\6\a\p\r\b\k\p\m\t\7\1\3\l\d\9\4\h\b\l\z\0\a\9\x\b\d\m\o\8\h\8\x\c\m\d\u\b\5\p\e\x\2\x\h\e\p\n\w\q\n\m\b\5\u\9\s\1\m\f\g\9\0\e\c\2\y\p\3\5\h\g\w\7\c\5\6\f\d\d\p\l\m\j\u\9\4\i\z\s\q\u\4\a\1\w\o\j\b\2\w\8\3\l\3\f\h\q\o\c\1\0\4\p\v\d\7\0\6\o\s\5\c\8\a\0\b\7\6\9\z ]] 00:28:22.588 00:28:22.588 real 0m4.693s 00:28:22.588 user 0m3.680s 00:28:22.588 sys 0m0.673s 00:28:22.588 22:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.588 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:28:22.588 22:36:21 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:28:22.588 22:36:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:22.588 22:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:22.588 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:28:22.588 ************************************ 00:28:22.588 START TEST dd_flag_noatime 00:28:22.588 ************************************ 00:28:22.588 22:36:21 -- common/autotest_common.sh@1104 -- # noatime 00:28:22.588 22:36:21 -- dd/posix.sh@53 -- # local atime_if 00:28:22.588 22:36:21 -- dd/posix.sh@54 -- # local atime_of 00:28:22.588 22:36:21 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:22.588 22:36:21 -- dd/common.sh@98 -- # xtrace_disable 00:28:22.588 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:28:22.588 22:36:21 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:22.588 22:36:21 -- dd/posix.sh@60 -- # atime_if=1720996580 00:28:22.588 22:36:21 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:22.588 22:36:21 -- dd/posix.sh@61 -- # atime_of=1720996581 00:28:22.588 22:36:21 -- dd/posix.sh@66 -- # sleep 1 00:28:23.524 22:36:22 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:23.524 [2024-07-14 22:36:22.583373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:23.524 [2024-07-14 22:36:22.583569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138379 ] 00:28:23.782 [2024-07-14 22:36:22.752953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.040 [2024-07-14 22:36:22.966259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.245  Copying: 512/512 [B] (average 500 kBps) 00:28:25.245 00:28:25.245 22:36:24 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:25.245 22:36:24 -- dd/posix.sh@69 -- # (( atime_if == 1720996580 )) 00:28:25.245 22:36:24 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:25.245 22:36:24 -- dd/posix.sh@70 -- # (( atime_of == 1720996581 )) 00:28:25.245 22:36:24 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:25.245 [2024-07-14 22:36:24.228616] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:25.245 [2024-07-14 22:36:24.228820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138399 ] 00:28:25.503 [2024-07-14 22:36:24.394571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.503 [2024-07-14 22:36:24.565626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.697  Copying: 512/512 [B] (average 500 kBps) 00:28:26.697 00:28:26.697 22:36:25 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:26.956 22:36:25 -- dd/posix.sh@73 -- # (( atime_if < 1720996584 )) 00:28:26.956 ************************************ 00:28:26.956 END TEST dd_flag_noatime 00:28:26.956 ************************************ 00:28:26.956 00:28:26.956 real 0m4.269s 00:28:26.956 user 0m2.537s 00:28:26.956 sys 0m0.456s 00:28:26.956 22:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.956 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:28:26.956 22:36:25 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:28:26.956 22:36:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:26.956 22:36:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:26.956 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:28:26.956 ************************************ 00:28:26.956 START TEST dd_flags_misc 00:28:26.956 ************************************ 00:28:26.956 22:36:25 -- common/autotest_common.sh@1104 -- # io 00:28:26.956 22:36:25 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:26.956 22:36:25 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
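dd_flag_noatime above captures each file's access time with stat --printf=%X (epoch seconds), reads through spdk_dd with --iflag=noatime, and asserts the timestamps did not move; a later read without the flag is allowed to advance them, hence the final less-than check. The pattern, with illustrative paths (the sleep 1 in the real test guarantees any update would be visible at one-second resolution):

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before ))   # O_NOATIME read leaves atime untouched
    spdk_dd --if=dd.dump0 --of=dd.dump1                  # a plain read may bump it, mount options permitting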
00:28:26.956 22:36:25 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:26.956 22:36:25 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:26.956 22:36:25 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:26.956 22:36:25 -- dd/common.sh@98 -- # xtrace_disable 00:28:26.956 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:28:26.956 22:36:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:26.956 22:36:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:26.956 [2024-07-14 22:36:25.895963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:26.956 [2024-07-14 22:36:25.896162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138442 ] 00:28:27.215 [2024-07-14 22:36:26.062872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.215 [2024-07-14 22:36:26.220841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.409  Copying: 512/512 [B] (average 500 kBps) 00:28:28.409 00:28:28.409 22:36:27 -- dd/posix.sh@93 -- # [[ 44tv4j38rsngbis0y56ip7286xbjxso9u0u1pxaphmaub6ibk7cvqamrxg7gqk51xrxphido9gj10gf5pndr026eb3e8of1mmnb7wl4gp8oaerp2tnozi2x1857km936kw3vtnqkyd4sdkwib3jitiuk2otc9pd6yzm5p886l40unz2or3iwzn56f15n06wuwcyqf662hjdcf9ktwlg9isswojoyjckkgpeoxezvvjfexgjbnquyyjcabo4m3tr2v515dc4d8oejaj7bgmt9yxlansn1z49gsb3e4h7yeq8iys4ntx7u8rgvfmt9ct2dp4cnl992cfac8f66ffo02m5vhtufj18lklkexztxwm3omm1k48f14sfs8ac3ar0vg0no5vombz3drqbtdei5y9bk6pm5t3a854ymg4p58u8w0wh3ah58fw8m95ynw0mycf60uijvw014tndr6519giok9tnvkttopcgl0jy4unjddtihd3031afxis53d6su == \4\4\t\v\4\j\3\8\r\s\n\g\b\i\s\0\y\5\6\i\p\7\2\8\6\x\b\j\x\s\o\9\u\0\u\1\p\x\a\p\h\m\a\u\b\6\i\b\k\7\c\v\q\a\m\r\x\g\7\g\q\k\5\1\x\r\x\p\h\i\d\o\9\g\j\1\0\g\f\5\p\n\d\r\0\2\6\e\b\3\e\8\o\f\1\m\m\n\b\7\w\l\4\g\p\8\o\a\e\r\p\2\t\n\o\z\i\2\x\1\8\5\7\k\m\9\3\6\k\w\3\v\t\n\q\k\y\d\4\s\d\k\w\i\b\3\j\i\t\i\u\k\2\o\t\c\9\p\d\6\y\z\m\5\p\8\8\6\l\4\0\u\n\z\2\o\r\3\i\w\z\n\5\6\f\1\5\n\0\6\w\u\w\c\y\q\f\6\6\2\h\j\d\c\f\9\k\t\w\l\g\9\i\s\s\w\o\j\o\y\j\c\k\k\g\p\e\o\x\e\z\v\v\j\f\e\x\g\j\b\n\q\u\y\y\j\c\a\b\o\4\m\3\t\r\2\v\5\1\5\d\c\4\d\8\o\e\j\a\j\7\b\g\m\t\9\y\x\l\a\n\s\n\1\z\4\9\g\s\b\3\e\4\h\7\y\e\q\8\i\y\s\4\n\t\x\7\u\8\r\g\v\f\m\t\9\c\t\2\d\p\4\c\n\l\9\9\2\c\f\a\c\8\f\6\6\f\f\o\0\2\m\5\v\h\t\u\f\j\1\8\l\k\l\k\e\x\z\t\x\w\m\3\o\m\m\1\k\4\8\f\1\4\s\f\s\8\a\c\3\a\r\0\v\g\0\n\o\5\v\o\m\b\z\3\d\r\q\b\t\d\e\i\5\y\9\b\k\6\p\m\5\t\3\a\8\5\4\y\m\g\4\p\5\8\u\8\w\0\w\h\3\a\h\5\8\f\w\8\m\9\5\y\n\w\0\m\y\c\f\6\0\u\i\j\v\w\0\1\4\t\n\d\r\6\5\1\9\g\i\o\k\9\t\n\v\k\t\t\o\p\c\g\l\0\j\y\4\u\n\j\d\d\t\i\h\d\3\0\3\1\a\f\x\i\s\5\3\d\6\s\u ]] 00:28:28.409 22:36:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:28.409 22:36:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:28.409 [2024-07-14 22:36:27.468485] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
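The remaining dd_flags_misc runs are generated by the pair of nested loops visible in the xtrace at dd/posix.sh@85 and @87: every read flag is combined with every write flag, so direct and nonblock each get paired with direct, nonblock, sync, and dsync, eight runs in all. Reconstructed as a sketch:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # the write side also exercises the O_SYNC/O_DSYNC variants
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw
      done
    done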
00:28:28.409 [2024-07-14 22:36:27.469249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138467 ] 00:28:28.668 [2024-07-14 22:36:27.635319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.926 [2024-07-14 22:36:27.789322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.115  Copying: 512/512 [B] (average 500 kBps) 00:28:30.115 00:28:30.115 22:36:28 -- dd/posix.sh@93 -- # [[ 44tv4j38rsngbis0y56ip7286xbjxso9u0u1pxaphmaub6ibk7cvqamrxg7gqk51xrxphido9gj10gf5pndr026eb3e8of1mmnb7wl4gp8oaerp2tnozi2x1857km936kw3vtnqkyd4sdkwib3jitiuk2otc9pd6yzm5p886l40unz2or3iwzn56f15n06wuwcyqf662hjdcf9ktwlg9isswojoyjckkgpeoxezvvjfexgjbnquyyjcabo4m3tr2v515dc4d8oejaj7bgmt9yxlansn1z49gsb3e4h7yeq8iys4ntx7u8rgvfmt9ct2dp4cnl992cfac8f66ffo02m5vhtufj18lklkexztxwm3omm1k48f14sfs8ac3ar0vg0no5vombz3drqbtdei5y9bk6pm5t3a854ymg4p58u8w0wh3ah58fw8m95ynw0mycf60uijvw014tndr6519giok9tnvkttopcgl0jy4unjddtihd3031afxis53d6su == \4\4\t\v\4\j\3\8\r\s\n\g\b\i\s\0\y\5\6\i\p\7\2\8\6\x\b\j\x\s\o\9\u\0\u\1\p\x\a\p\h\m\a\u\b\6\i\b\k\7\c\v\q\a\m\r\x\g\7\g\q\k\5\1\x\r\x\p\h\i\d\o\9\g\j\1\0\g\f\5\p\n\d\r\0\2\6\e\b\3\e\8\o\f\1\m\m\n\b\7\w\l\4\g\p\8\o\a\e\r\p\2\t\n\o\z\i\2\x\1\8\5\7\k\m\9\3\6\k\w\3\v\t\n\q\k\y\d\4\s\d\k\w\i\b\3\j\i\t\i\u\k\2\o\t\c\9\p\d\6\y\z\m\5\p\8\8\6\l\4\0\u\n\z\2\o\r\3\i\w\z\n\5\6\f\1\5\n\0\6\w\u\w\c\y\q\f\6\6\2\h\j\d\c\f\9\k\t\w\l\g\9\i\s\s\w\o\j\o\y\j\c\k\k\g\p\e\o\x\e\z\v\v\j\f\e\x\g\j\b\n\q\u\y\y\j\c\a\b\o\4\m\3\t\r\2\v\5\1\5\d\c\4\d\8\o\e\j\a\j\7\b\g\m\t\9\y\x\l\a\n\s\n\1\z\4\9\g\s\b\3\e\4\h\7\y\e\q\8\i\y\s\4\n\t\x\7\u\8\r\g\v\f\m\t\9\c\t\2\d\p\4\c\n\l\9\9\2\c\f\a\c\8\f\6\6\f\f\o\0\2\m\5\v\h\t\u\f\j\1\8\l\k\l\k\e\x\z\t\x\w\m\3\o\m\m\1\k\4\8\f\1\4\s\f\s\8\a\c\3\a\r\0\v\g\0\n\o\5\v\o\m\b\z\3\d\r\q\b\t\d\e\i\5\y\9\b\k\6\p\m\5\t\3\a\8\5\4\y\m\g\4\p\5\8\u\8\w\0\w\h\3\a\h\5\8\f\w\8\m\9\5\y\n\w\0\m\y\c\f\6\0\u\i\j\v\w\0\1\4\t\n\d\r\6\5\1\9\g\i\o\k\9\t\n\v\k\t\t\o\p\c\g\l\0\j\y\4\u\n\j\d\d\t\i\h\d\3\0\3\1\a\f\x\i\s\5\3\d\6\s\u ]] 00:28:30.115 22:36:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:30.115 22:36:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:30.115 [2024-07-14 22:36:29.023900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:30.115 [2024-07-14 22:36:29.024109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138492 ] 00:28:30.382 [2024-07-14 22:36:29.190326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.382 [2024-07-14 22:36:29.347996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.608  Copying: 512/512 [B] (average 166 kBps) 00:28:31.608 00:28:31.608 22:36:30 -- dd/posix.sh@93 -- # [[ 44tv4j38rsngbis0y56ip7286xbjxso9u0u1pxaphmaub6ibk7cvqamrxg7gqk51xrxphido9gj10gf5pndr026eb3e8of1mmnb7wl4gp8oaerp2tnozi2x1857km936kw3vtnqkyd4sdkwib3jitiuk2otc9pd6yzm5p886l40unz2or3iwzn56f15n06wuwcyqf662hjdcf9ktwlg9isswojoyjckkgpeoxezvvjfexgjbnquyyjcabo4m3tr2v515dc4d8oejaj7bgmt9yxlansn1z49gsb3e4h7yeq8iys4ntx7u8rgvfmt9ct2dp4cnl992cfac8f66ffo02m5vhtufj18lklkexztxwm3omm1k48f14sfs8ac3ar0vg0no5vombz3drqbtdei5y9bk6pm5t3a854ymg4p58u8w0wh3ah58fw8m95ynw0mycf60uijvw014tndr6519giok9tnvkttopcgl0jy4unjddtihd3031afxis53d6su == \4\4\t\v\4\j\3\8\r\s\n\g\b\i\s\0\y\5\6\i\p\7\2\8\6\x\b\j\x\s\o\9\u\0\u\1\p\x\a\p\h\m\a\u\b\6\i\b\k\7\c\v\q\a\m\r\x\g\7\g\q\k\5\1\x\r\x\p\h\i\d\o\9\g\j\1\0\g\f\5\p\n\d\r\0\2\6\e\b\3\e\8\o\f\1\m\m\n\b\7\w\l\4\g\p\8\o\a\e\r\p\2\t\n\o\z\i\2\x\1\8\5\7\k\m\9\3\6\k\w\3\v\t\n\q\k\y\d\4\s\d\k\w\i\b\3\j\i\t\i\u\k\2\o\t\c\9\p\d\6\y\z\m\5\p\8\8\6\l\4\0\u\n\z\2\o\r\3\i\w\z\n\5\6\f\1\5\n\0\6\w\u\w\c\y\q\f\6\6\2\h\j\d\c\f\9\k\t\w\l\g\9\i\s\s\w\o\j\o\y\j\c\k\k\g\p\e\o\x\e\z\v\v\j\f\e\x\g\j\b\n\q\u\y\y\j\c\a\b\o\4\m\3\t\r\2\v\5\1\5\d\c\4\d\8\o\e\j\a\j\7\b\g\m\t\9\y\x\l\a\n\s\n\1\z\4\9\g\s\b\3\e\4\h\7\y\e\q\8\i\y\s\4\n\t\x\7\u\8\r\g\v\f\m\t\9\c\t\2\d\p\4\c\n\l\9\9\2\c\f\a\c\8\f\6\6\f\f\o\0\2\m\5\v\h\t\u\f\j\1\8\l\k\l\k\e\x\z\t\x\w\m\3\o\m\m\1\k\4\8\f\1\4\s\f\s\8\a\c\3\a\r\0\v\g\0\n\o\5\v\o\m\b\z\3\d\r\q\b\t\d\e\i\5\y\9\b\k\6\p\m\5\t\3\a\8\5\4\y\m\g\4\p\5\8\u\8\w\0\w\h\3\a\h\5\8\f\w\8\m\9\5\y\n\w\0\m\y\c\f\6\0\u\i\j\v\w\0\1\4\t\n\d\r\6\5\1\9\g\i\o\k\9\t\n\v\k\t\t\o\p\c\g\l\0\j\y\4\u\n\j\d\d\t\i\h\d\3\0\3\1\a\f\x\i\s\5\3\d\6\s\u ]] 00:28:31.608 22:36:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:31.608 22:36:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:31.608 [2024-07-14 22:36:30.602415] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:31.608 [2024-07-14 22:36:30.602609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138509 ] 00:28:31.866 [2024-07-14 22:36:30.770286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.866 [2024-07-14 22:36:30.929895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.364  Copying: 512/512 [B] (average 250 kBps) 00:28:33.364 00:28:33.364 22:36:32 -- dd/posix.sh@93 -- # [[ 44tv4j38rsngbis0y56ip7286xbjxso9u0u1pxaphmaub6ibk7cvqamrxg7gqk51xrxphido9gj10gf5pndr026eb3e8of1mmnb7wl4gp8oaerp2tnozi2x1857km936kw3vtnqkyd4sdkwib3jitiuk2otc9pd6yzm5p886l40unz2or3iwzn56f15n06wuwcyqf662hjdcf9ktwlg9isswojoyjckkgpeoxezvvjfexgjbnquyyjcabo4m3tr2v515dc4d8oejaj7bgmt9yxlansn1z49gsb3e4h7yeq8iys4ntx7u8rgvfmt9ct2dp4cnl992cfac8f66ffo02m5vhtufj18lklkexztxwm3omm1k48f14sfs8ac3ar0vg0no5vombz3drqbtdei5y9bk6pm5t3a854ymg4p58u8w0wh3ah58fw8m95ynw0mycf60uijvw014tndr6519giok9tnvkttopcgl0jy4unjddtihd3031afxis53d6su == \4\4\t\v\4\j\3\8\r\s\n\g\b\i\s\0\y\5\6\i\p\7\2\8\6\x\b\j\x\s\o\9\u\0\u\1\p\x\a\p\h\m\a\u\b\6\i\b\k\7\c\v\q\a\m\r\x\g\7\g\q\k\5\1\x\r\x\p\h\i\d\o\9\g\j\1\0\g\f\5\p\n\d\r\0\2\6\e\b\3\e\8\o\f\1\m\m\n\b\7\w\l\4\g\p\8\o\a\e\r\p\2\t\n\o\z\i\2\x\1\8\5\7\k\m\9\3\6\k\w\3\v\t\n\q\k\y\d\4\s\d\k\w\i\b\3\j\i\t\i\u\k\2\o\t\c\9\p\d\6\y\z\m\5\p\8\8\6\l\4\0\u\n\z\2\o\r\3\i\w\z\n\5\6\f\1\5\n\0\6\w\u\w\c\y\q\f\6\6\2\h\j\d\c\f\9\k\t\w\l\g\9\i\s\s\w\o\j\o\y\j\c\k\k\g\p\e\o\x\e\z\v\v\j\f\e\x\g\j\b\n\q\u\y\y\j\c\a\b\o\4\m\3\t\r\2\v\5\1\5\d\c\4\d\8\o\e\j\a\j\7\b\g\m\t\9\y\x\l\a\n\s\n\1\z\4\9\g\s\b\3\e\4\h\7\y\e\q\8\i\y\s\4\n\t\x\7\u\8\r\g\v\f\m\t\9\c\t\2\d\p\4\c\n\l\9\9\2\c\f\a\c\8\f\6\6\f\f\o\0\2\m\5\v\h\t\u\f\j\1\8\l\k\l\k\e\x\z\t\x\w\m\3\o\m\m\1\k\4\8\f\1\4\s\f\s\8\a\c\3\a\r\0\v\g\0\n\o\5\v\o\m\b\z\3\d\r\q\b\t\d\e\i\5\y\9\b\k\6\p\m\5\t\3\a\8\5\4\y\m\g\4\p\5\8\u\8\w\0\w\h\3\a\h\5\8\f\w\8\m\9\5\y\n\w\0\m\y\c\f\6\0\u\i\j\v\w\0\1\4\t\n\d\r\6\5\1\9\g\i\o\k\9\t\n\v\k\t\t\o\p\c\g\l\0\j\y\4\u\n\j\d\d\t\i\h\d\3\0\3\1\a\f\x\i\s\5\3\d\6\s\u ]] 00:28:33.364 22:36:32 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:33.364 22:36:32 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:33.364 22:36:32 -- dd/common.sh@98 -- # xtrace_disable 00:28:33.364 22:36:32 -- common/autotest_common.sh@10 -- # set +x 00:28:33.364 22:36:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:33.364 22:36:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:33.364 [2024-07-14 22:36:32.212501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:33.364 [2024-07-14 22:36:32.212711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138548 ] 00:28:33.364 [2024-07-14 22:36:32.382485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.622 [2024-07-14 22:36:32.538873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.815  Copying: 512/512 [B] (average 500 kBps) 00:28:34.815 00:28:34.815 22:36:33 -- dd/posix.sh@93 -- # [[ 29710udka6m7cl5u2e8ebujm8rblhm3857ljhijsd126fnig2oxg2er3b7kvfec02jtyqty0qcjvhvdkf4jikigvodvjifcb70j7nl05uwpghtwwskfp343agttxdyzjywcgmggthx6kod0r0bsutzyzbs2bqe1ll1lu80ea4fi4lylynkzw40t6xy6i4ppj2b6zy9os0hyka5fzvv9iygjgb7qe8gzsdz3hbby12gqxuye29l499lwrdts5ru95pn96idaspjbirimoc0zgye9n5qeb5nqmakci4vs6ovpnvbl7s56eo8rmi52idmx8nno4l0lzyb499dvichgv8tffvfhciyout6kkwv3hnymiubzbb8769zs7nl8llo12wf6e876gh6wqrwgar7sv5ld1kannogglyaiinthc12heu6n2cqnrcnkvyadpnlm6c1hnau6wwbpx8907w0fco2uobctdujs1pipal2nvx8ozyetj7jx1hbm7l8mrhx87 == \2\9\7\1\0\u\d\k\a\6\m\7\c\l\5\u\2\e\8\e\b\u\j\m\8\r\b\l\h\m\3\8\5\7\l\j\h\i\j\s\d\1\2\6\f\n\i\g\2\o\x\g\2\e\r\3\b\7\k\v\f\e\c\0\2\j\t\y\q\t\y\0\q\c\j\v\h\v\d\k\f\4\j\i\k\i\g\v\o\d\v\j\i\f\c\b\7\0\j\7\n\l\0\5\u\w\p\g\h\t\w\w\s\k\f\p\3\4\3\a\g\t\t\x\d\y\z\j\y\w\c\g\m\g\g\t\h\x\6\k\o\d\0\r\0\b\s\u\t\z\y\z\b\s\2\b\q\e\1\l\l\1\l\u\8\0\e\a\4\f\i\4\l\y\l\y\n\k\z\w\4\0\t\6\x\y\6\i\4\p\p\j\2\b\6\z\y\9\o\s\0\h\y\k\a\5\f\z\v\v\9\i\y\g\j\g\b\7\q\e\8\g\z\s\d\z\3\h\b\b\y\1\2\g\q\x\u\y\e\2\9\l\4\9\9\l\w\r\d\t\s\5\r\u\9\5\p\n\9\6\i\d\a\s\p\j\b\i\r\i\m\o\c\0\z\g\y\e\9\n\5\q\e\b\5\n\q\m\a\k\c\i\4\v\s\6\o\v\p\n\v\b\l\7\s\5\6\e\o\8\r\m\i\5\2\i\d\m\x\8\n\n\o\4\l\0\l\z\y\b\4\9\9\d\v\i\c\h\g\v\8\t\f\f\v\f\h\c\i\y\o\u\t\6\k\k\w\v\3\h\n\y\m\i\u\b\z\b\b\8\7\6\9\z\s\7\n\l\8\l\l\o\1\2\w\f\6\e\8\7\6\g\h\6\w\q\r\w\g\a\r\7\s\v\5\l\d\1\k\a\n\n\o\g\g\l\y\a\i\i\n\t\h\c\1\2\h\e\u\6\n\2\c\q\n\r\c\n\k\v\y\a\d\p\n\l\m\6\c\1\h\n\a\u\6\w\w\b\p\x\8\9\0\7\w\0\f\c\o\2\u\o\b\c\t\d\u\j\s\1\p\i\p\a\l\2\n\v\x\8\o\z\y\e\t\j\7\j\x\1\h\b\m\7\l\8\m\r\h\x\8\7 ]] 00:28:34.815 22:36:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:34.815 22:36:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:34.815 [2024-07-14 22:36:33.776037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:34.815 [2024-07-14 22:36:33.776201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138577 ] 00:28:35.074 [2024-07-14 22:36:33.926894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.074 [2024-07-14 22:36:34.082819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.266  Copying: 512/512 [B] (average 500 kBps) 00:28:36.266 00:28:36.266 22:36:35 -- dd/posix.sh@93 -- # [[ 29710udka6m7cl5u2e8ebujm8rblhm3857ljhijsd126fnig2oxg2er3b7kvfec02jtyqty0qcjvhvdkf4jikigvodvjifcb70j7nl05uwpghtwwskfp343agttxdyzjywcgmggthx6kod0r0bsutzyzbs2bqe1ll1lu80ea4fi4lylynkzw40t6xy6i4ppj2b6zy9os0hyka5fzvv9iygjgb7qe8gzsdz3hbby12gqxuye29l499lwrdts5ru95pn96idaspjbirimoc0zgye9n5qeb5nqmakci4vs6ovpnvbl7s56eo8rmi52idmx8nno4l0lzyb499dvichgv8tffvfhciyout6kkwv3hnymiubzbb8769zs7nl8llo12wf6e876gh6wqrwgar7sv5ld1kannogglyaiinthc12heu6n2cqnrcnkvyadpnlm6c1hnau6wwbpx8907w0fco2uobctdujs1pipal2nvx8ozyetj7jx1hbm7l8mrhx87 == \2\9\7\1\0\u\d\k\a\6\m\7\c\l\5\u\2\e\8\e\b\u\j\m\8\r\b\l\h\m\3\8\5\7\l\j\h\i\j\s\d\1\2\6\f\n\i\g\2\o\x\g\2\e\r\3\b\7\k\v\f\e\c\0\2\j\t\y\q\t\y\0\q\c\j\v\h\v\d\k\f\4\j\i\k\i\g\v\o\d\v\j\i\f\c\b\7\0\j\7\n\l\0\5\u\w\p\g\h\t\w\w\s\k\f\p\3\4\3\a\g\t\t\x\d\y\z\j\y\w\c\g\m\g\g\t\h\x\6\k\o\d\0\r\0\b\s\u\t\z\y\z\b\s\2\b\q\e\1\l\l\1\l\u\8\0\e\a\4\f\i\4\l\y\l\y\n\k\z\w\4\0\t\6\x\y\6\i\4\p\p\j\2\b\6\z\y\9\o\s\0\h\y\k\a\5\f\z\v\v\9\i\y\g\j\g\b\7\q\e\8\g\z\s\d\z\3\h\b\b\y\1\2\g\q\x\u\y\e\2\9\l\4\9\9\l\w\r\d\t\s\5\r\u\9\5\p\n\9\6\i\d\a\s\p\j\b\i\r\i\m\o\c\0\z\g\y\e\9\n\5\q\e\b\5\n\q\m\a\k\c\i\4\v\s\6\o\v\p\n\v\b\l\7\s\5\6\e\o\8\r\m\i\5\2\i\d\m\x\8\n\n\o\4\l\0\l\z\y\b\4\9\9\d\v\i\c\h\g\v\8\t\f\f\v\f\h\c\i\y\o\u\t\6\k\k\w\v\3\h\n\y\m\i\u\b\z\b\b\8\7\6\9\z\s\7\n\l\8\l\l\o\1\2\w\f\6\e\8\7\6\g\h\6\w\q\r\w\g\a\r\7\s\v\5\l\d\1\k\a\n\n\o\g\g\l\y\a\i\i\n\t\h\c\1\2\h\e\u\6\n\2\c\q\n\r\c\n\k\v\y\a\d\p\n\l\m\6\c\1\h\n\a\u\6\w\w\b\p\x\8\9\0\7\w\0\f\c\o\2\u\o\b\c\t\d\u\j\s\1\p\i\p\a\l\2\n\v\x\8\o\z\y\e\t\j\7\j\x\1\h\b\m\7\l\8\m\r\h\x\8\7 ]] 00:28:36.266 22:36:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:36.266 22:36:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:36.266 [2024-07-14 22:36:35.334348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:36.266 [2024-07-14 22:36:35.334551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138594 ] 00:28:36.524 [2024-07-14 22:36:35.502024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.783 [2024-07-14 22:36:35.656360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.977  Copying: 512/512 [B] (average 166 kBps) 00:28:37.977 00:28:37.977 22:36:36 -- dd/posix.sh@93 -- # [[ 29710udka6m7cl5u2e8ebujm8rblhm3857ljhijsd126fnig2oxg2er3b7kvfec02jtyqty0qcjvhvdkf4jikigvodvjifcb70j7nl05uwpghtwwskfp343agttxdyzjywcgmggthx6kod0r0bsutzyzbs2bqe1ll1lu80ea4fi4lylynkzw40t6xy6i4ppj2b6zy9os0hyka5fzvv9iygjgb7qe8gzsdz3hbby12gqxuye29l499lwrdts5ru95pn96idaspjbirimoc0zgye9n5qeb5nqmakci4vs6ovpnvbl7s56eo8rmi52idmx8nno4l0lzyb499dvichgv8tffvfhciyout6kkwv3hnymiubzbb8769zs7nl8llo12wf6e876gh6wqrwgar7sv5ld1kannogglyaiinthc12heu6n2cqnrcnkvyadpnlm6c1hnau6wwbpx8907w0fco2uobctdujs1pipal2nvx8ozyetj7jx1hbm7l8mrhx87 == \2\9\7\1\0\u\d\k\a\6\m\7\c\l\5\u\2\e\8\e\b\u\j\m\8\r\b\l\h\m\3\8\5\7\l\j\h\i\j\s\d\1\2\6\f\n\i\g\2\o\x\g\2\e\r\3\b\7\k\v\f\e\c\0\2\j\t\y\q\t\y\0\q\c\j\v\h\v\d\k\f\4\j\i\k\i\g\v\o\d\v\j\i\f\c\b\7\0\j\7\n\l\0\5\u\w\p\g\h\t\w\w\s\k\f\p\3\4\3\a\g\t\t\x\d\y\z\j\y\w\c\g\m\g\g\t\h\x\6\k\o\d\0\r\0\b\s\u\t\z\y\z\b\s\2\b\q\e\1\l\l\1\l\u\8\0\e\a\4\f\i\4\l\y\l\y\n\k\z\w\4\0\t\6\x\y\6\i\4\p\p\j\2\b\6\z\y\9\o\s\0\h\y\k\a\5\f\z\v\v\9\i\y\g\j\g\b\7\q\e\8\g\z\s\d\z\3\h\b\b\y\1\2\g\q\x\u\y\e\2\9\l\4\9\9\l\w\r\d\t\s\5\r\u\9\5\p\n\9\6\i\d\a\s\p\j\b\i\r\i\m\o\c\0\z\g\y\e\9\n\5\q\e\b\5\n\q\m\a\k\c\i\4\v\s\6\o\v\p\n\v\b\l\7\s\5\6\e\o\8\r\m\i\5\2\i\d\m\x\8\n\n\o\4\l\0\l\z\y\b\4\9\9\d\v\i\c\h\g\v\8\t\f\f\v\f\h\c\i\y\o\u\t\6\k\k\w\v\3\h\n\y\m\i\u\b\z\b\b\8\7\6\9\z\s\7\n\l\8\l\l\o\1\2\w\f\6\e\8\7\6\g\h\6\w\q\r\w\g\a\r\7\s\v\5\l\d\1\k\a\n\n\o\g\g\l\y\a\i\i\n\t\h\c\1\2\h\e\u\6\n\2\c\q\n\r\c\n\k\v\y\a\d\p\n\l\m\6\c\1\h\n\a\u\6\w\w\b\p\x\8\9\0\7\w\0\f\c\o\2\u\o\b\c\t\d\u\j\s\1\p\i\p\a\l\2\n\v\x\8\o\z\y\e\t\j\7\j\x\1\h\b\m\7\l\8\m\r\h\x\8\7 ]] 00:28:37.977 22:36:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:37.977 22:36:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:37.977 [2024-07-14 22:36:36.903176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
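A note on the trace above: the long backslash-escaped runs such as \2\9\7\1\0... are not corruption. Inside [[ lhs == rhs ]] bash treats the right-hand side as a glob pattern, so the scripts' xtrace (set -x) prints every character of the expanded comparison string backslash-escaped. A minimal sketch that reproduces the pattern, assuming GNU coreutils dd and bash; the file names mirror the trace, and the random-token pipeline is only a stand-in for the suite's gen_bytes helper:

# Reproduction of the dd_flags_misc verification pattern: fill dump0 with
# random bytes, copy with a given flag pair, then assert both files match.
set -x

src=dd.dump0 dst=dd.dump1

# stand-in for the suite's gen_bytes helper: 512 random [a-z0-9] bytes
head -c 2048 /dev/urandom | base64 | tr -dc 'a-z0-9' | head -c 512 > "$src"

# any dd/spdk_dd copy goes here; plain dd keeps the sketch self-contained
dd if="$src" of="$dst" oflag=dsync status=none

# xtrace prints the right-hand side fully backslash-escaped, as in this log
[[ $(< "$dst") == $(< "$src") ]] && echo "contents match"

Running this sketch under bash -x produces exactly the kind of [[ token == \t\o\k\e\n ]] lines seen throughout this test.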
00:28:37.977 [2024-07-14 22:36:36.903309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138618 ] 00:28:38.238 [2024-07-14 22:36:37.054570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.238 [2024-07-14 22:36:37.227211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.432  Copying: 512/512 [B] (average 250 kBps) 00:28:39.432 00:28:39.432 ************************************ 00:28:39.432 END TEST dd_flags_misc 00:28:39.432 ************************************ 00:28:39.432 22:36:38 -- dd/posix.sh@93 -- # [[ 29710udka6m7cl5u2e8ebujm8rblhm3857ljhijsd126fnig2oxg2er3b7kvfec02jtyqty0qcjvhvdkf4jikigvodvjifcb70j7nl05uwpghtwwskfp343agttxdyzjywcgmggthx6kod0r0bsutzyzbs2bqe1ll1lu80ea4fi4lylynkzw40t6xy6i4ppj2b6zy9os0hyka5fzvv9iygjgb7qe8gzsdz3hbby12gqxuye29l499lwrdts5ru95pn96idaspjbirimoc0zgye9n5qeb5nqmakci4vs6ovpnvbl7s56eo8rmi52idmx8nno4l0lzyb499dvichgv8tffvfhciyout6kkwv3hnymiubzbb8769zs7nl8llo12wf6e876gh6wqrwgar7sv5ld1kannogglyaiinthc12heu6n2cqnrcnkvyadpnlm6c1hnau6wwbpx8907w0fco2uobctdujs1pipal2nvx8ozyetj7jx1hbm7l8mrhx87 == \2\9\7\1\0\u\d\k\a\6\m\7\c\l\5\u\2\e\8\e\b\u\j\m\8\r\b\l\h\m\3\8\5\7\l\j\h\i\j\s\d\1\2\6\f\n\i\g\2\o\x\g\2\e\r\3\b\7\k\v\f\e\c\0\2\j\t\y\q\t\y\0\q\c\j\v\h\v\d\k\f\4\j\i\k\i\g\v\o\d\v\j\i\f\c\b\7\0\j\7\n\l\0\5\u\w\p\g\h\t\w\w\s\k\f\p\3\4\3\a\g\t\t\x\d\y\z\j\y\w\c\g\m\g\g\t\h\x\6\k\o\d\0\r\0\b\s\u\t\z\y\z\b\s\2\b\q\e\1\l\l\1\l\u\8\0\e\a\4\f\i\4\l\y\l\y\n\k\z\w\4\0\t\6\x\y\6\i\4\p\p\j\2\b\6\z\y\9\o\s\0\h\y\k\a\5\f\z\v\v\9\i\y\g\j\g\b\7\q\e\8\g\z\s\d\z\3\h\b\b\y\1\2\g\q\x\u\y\e\2\9\l\4\9\9\l\w\r\d\t\s\5\r\u\9\5\p\n\9\6\i\d\a\s\p\j\b\i\r\i\m\o\c\0\z\g\y\e\9\n\5\q\e\b\5\n\q\m\a\k\c\i\4\v\s\6\o\v\p\n\v\b\l\7\s\5\6\e\o\8\r\m\i\5\2\i\d\m\x\8\n\n\o\4\l\0\l\z\y\b\4\9\9\d\v\i\c\h\g\v\8\t\f\f\v\f\h\c\i\y\o\u\t\6\k\k\w\v\3\h\n\y\m\i\u\b\z\b\b\8\7\6\9\z\s\7\n\l\8\l\l\o\1\2\w\f\6\e\8\7\6\g\h\6\w\q\r\w\g\a\r\7\s\v\5\l\d\1\k\a\n\n\o\g\g\l\y\a\i\i\n\t\h\c\1\2\h\e\u\6\n\2\c\q\n\r\c\n\k\v\y\a\d\p\n\l\m\6\c\1\h\n\a\u\6\w\w\b\p\x\8\9\0\7\w\0\f\c\o\2\u\o\b\c\t\d\u\j\s\1\p\i\p\a\l\2\n\v\x\8\o\z\y\e\t\j\7\j\x\1\h\b\m\7\l\8\m\r\h\x\8\7 ]] 00:28:39.432 00:28:39.432 real 0m12.582s 00:28:39.432 user 0m9.806s 00:28:39.432 sys 0m1.707s 00:28:39.432 22:36:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.432 22:36:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.432 22:36:38 -- dd/posix.sh@131 -- # tests_forced_aio 00:28:39.432 22:36:38 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:28:39.432 * Second test run, using AIO 00:28:39.432 22:36:38 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:28:39.432 22:36:38 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:28:39.432 22:36:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:39.432 22:36:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:39.432 22:36:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.432 ************************************ 00:28:39.432 START TEST dd_flag_append_forced_aio 00:28:39.432 ************************************ 00:28:39.432 22:36:38 -- common/autotest_common.sh@1104 -- # append 00:28:39.432 22:36:38 -- dd/posix.sh@16 -- # local dump0 00:28:39.432 22:36:38 -- dd/posix.sh@17 -- # local dump1 00:28:39.432 22:36:38 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:39.432 22:36:38 -- dd/common.sh@98 -- # xtrace_disable 
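dd_flag_append_forced_aio, starting above, writes one 32-byte token into each dump file and then copies dump0 onto dump1 with --oflag=append; the [[ ... ]] check traced below asserts that dump1 now holds its own token followed by dump0's. A sketch of the same O_APPEND semantics with plain coreutils dd (the real test drives spdk_dd's --aio path; the two tokens are copied verbatim from the trace):

# Illustrative only: mirrors the traced append check with coreutils dd.
dump0_tok=kgk5h8joppnul570yromm5l3avwh0pme    # token from the trace
dump1_tok=1xnfys9xo7kftdzszi42zwc6sqz6s91x    # token from the trace

printf %s "$dump0_tok" > dd.dump0
printf %s "$dump1_tok" > dd.dump1

# oflag=append opens the output O_APPEND; conv=notrunc keeps its contents
dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none

# expected: dump1's original token with dump0's token appended after it
[[ $(< dd.dump1) == "${dump1_tok}${dump0_tok}" ]] && echo "append ok"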
00:28:39.432 22:36:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.432 22:36:38 -- dd/posix.sh@19 -- # dump0=kgk5h8joppnul570yromm5l3avwh0pme 00:28:39.432 22:36:38 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:39.432 22:36:38 -- dd/common.sh@98 -- # xtrace_disable 00:28:39.432 22:36:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.432 22:36:38 -- dd/posix.sh@20 -- # dump1=1xnfys9xo7kftdzszi42zwc6sqz6s91x 00:28:39.432 22:36:38 -- dd/posix.sh@22 -- # printf %s kgk5h8joppnul570yromm5l3avwh0pme 00:28:39.432 22:36:38 -- dd/posix.sh@23 -- # printf %s 1xnfys9xo7kftdzszi42zwc6sqz6s91x 00:28:39.432 22:36:38 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:39.691 [2024-07-14 22:36:38.532784] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:39.691 [2024-07-14 22:36:38.532987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138662 ] 00:28:39.691 [2024-07-14 22:36:38.698748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.949 [2024-07-14 22:36:38.858177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.143  Copying: 32/32 [B] (average 31 kBps) 00:28:41.143 00:28:41.143 22:36:40 -- dd/posix.sh@27 -- # [[ 1xnfys9xo7kftdzszi42zwc6sqz6s91xkgk5h8joppnul570yromm5l3avwh0pme == \1\x\n\f\y\s\9\x\o\7\k\f\t\d\z\s\z\i\4\2\z\w\c\6\s\q\z\6\s\9\1\x\k\g\k\5\h\8\j\o\p\p\n\u\l\5\7\0\y\r\o\m\m\5\l\3\a\v\w\h\0\p\m\e ]] 00:28:41.143 00:28:41.143 real 0m1.573s 00:28:41.143 user 0m1.252s 00:28:41.143 sys 0m0.190s 00:28:41.143 22:36:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.143 22:36:40 -- common/autotest_common.sh@10 -- # set +x 00:28:41.143 ************************************ 00:28:41.143 END TEST dd_flag_append_forced_aio 00:28:41.143 ************************************ 00:28:41.143 22:36:40 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:28:41.143 22:36:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:41.143 22:36:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:41.143 22:36:40 -- common/autotest_common.sh@10 -- # set +x 00:28:41.143 ************************************ 00:28:41.143 START TEST dd_flag_directory_forced_aio 00:28:41.143 ************************************ 00:28:41.143 22:36:40 -- common/autotest_common.sh@1104 -- # directory 00:28:41.143 22:36:40 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:41.143 22:36:40 -- common/autotest_common.sh@640 -- # local es=0 00:28:41.143 22:36:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:41.143 22:36:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:41.143 22:36:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:41.143 22:36:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:41.143 22:36:40 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:41.143 22:36:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:41.143 22:36:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:41.143 22:36:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:41.143 22:36:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:41.143 22:36:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:41.143 [2024-07-14 22:36:40.156105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:41.143 [2024-07-14 22:36:40.156289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138709 ] 00:28:41.401 [2024-07-14 22:36:40.323008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.659 [2024-07-14 22:36:40.476578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.659 [2024-07-14 22:36:40.720113] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:41.659 [2024-07-14 22:36:40.720194] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:41.659 [2024-07-14 22:36:40.720217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:42.594 [2024-07-14 22:36:41.317047] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:42.594 22:36:41 -- common/autotest_common.sh@643 -- # es=236 00:28:42.594 22:36:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:42.594 22:36:41 -- common/autotest_common.sh@652 -- # es=108 00:28:42.594 22:36:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:42.594 22:36:41 -- common/autotest_common.sh@660 -- # es=1 00:28:42.594 22:36:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:42.594 22:36:41 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:42.594 22:36:41 -- common/autotest_common.sh@640 -- # local es=0 00:28:42.594 22:36:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:42.594 22:36:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:42.594 22:36:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.594 22:36:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:42.594 22:36:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.594 22:36:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:42.594 22:36:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.594 22:36:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
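The NOT ... es=236 ... es=1 sequence above is the suite's expected-failure wrapper: --iflag=directory requests O_DIRECTORY, so opening a regular dump file must fail with "Not a directory", and the wrapper turns the non-zero exit into a pass; the run traced next repeats the check on the write side with --oflag=directory. A condensed sketch of the same assertion, where NOT only approximates the traced autotest_common.sh helper:

# Expected-failure check: opening a regular file with O_DIRECTORY must fail.
# NOT() approximates the traced autotest_common.sh helper: it passes only
# when the wrapped command exits non-zero.
NOT() { if "$@"; then return 1; else return 0; fi; }

printf 'payload' > dd.dump0

# dd's iflag=directory maps to O_DIRECTORY; a regular file gives ENOTDIR
if NOT dd if=dd.dump0 iflag=directory of=/dev/null status=none; then
    echo "directory flag correctly rejected a regular file"
fi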
00:28:42.594 22:36:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:42.594 22:36:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:42.852 [2024-07-14 22:36:41.712882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:42.852 [2024-07-14 22:36:41.713076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138747 ] 00:28:42.853 [2024-07-14 22:36:41.879150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.111 [2024-07-14 22:36:42.038720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.370 [2024-07-14 22:36:42.290179] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:43.370 [2024-07-14 22:36:42.290248] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:43.370 [2024-07-14 22:36:42.290274] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:43.937 [2024-07-14 22:36:42.860086] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:44.195 22:36:43 -- common/autotest_common.sh@643 -- # es=236 00:28:44.195 22:36:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:44.195 22:36:43 -- common/autotest_common.sh@652 -- # es=108 00:28:44.195 22:36:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:44.195 22:36:43 -- common/autotest_common.sh@660 -- # es=1 00:28:44.195 22:36:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:44.195 00:28:44.195 real 0m3.080s 00:28:44.195 user 0m2.478s 00:28:44.195 sys 0m0.401s 00:28:44.195 22:36:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.195 22:36:43 -- common/autotest_common.sh@10 -- # set +x 00:28:44.195 ************************************ 00:28:44.195 END TEST dd_flag_directory_forced_aio 00:28:44.195 ************************************ 00:28:44.195 22:36:43 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:44.195 22:36:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:44.195 22:36:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:44.195 22:36:43 -- common/autotest_common.sh@10 -- # set +x 00:28:44.195 ************************************ 00:28:44.195 START TEST dd_flag_nofollow_forced_aio 00:28:44.195 ************************************ 00:28:44.195 22:36:43 -- common/autotest_common.sh@1104 -- # nofollow 00:28:44.195 22:36:43 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:44.195 22:36:43 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:44.195 22:36:43 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:44.195 22:36:43 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:44.195 22:36:43 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:44.195 22:36:43 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.195 22:36:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:44.195 22:36:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.195 22:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.195 22:36:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.195 22:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.195 22:36:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.195 22:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.195 22:36:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.195 22:36:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:44.195 22:36:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:44.480 [2024-07-14 22:36:43.275524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:44.480 [2024-07-14 22:36:43.275665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138792 ] 00:28:44.480 [2024-07-14 22:36:43.426780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.738 [2024-07-14 22:36:43.592370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.996 [2024-07-14 22:36:43.836814] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:44.996 [2024-07-14 22:36:43.836904] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:44.996 [2024-07-14 22:36:43.836928] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.561 [2024-07-14 22:36:44.407655] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:45.820 22:36:44 -- common/autotest_common.sh@643 -- # es=216 00:28:45.821 22:36:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:45.821 22:36:44 -- common/autotest_common.sh@652 -- # es=88 00:28:45.821 22:36:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:45.821 22:36:44 -- common/autotest_common.sh@660 -- # es=1 00:28:45.821 22:36:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:45.821 22:36:44 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:45.821 22:36:44 -- common/autotest_common.sh@640 -- # local es=0 00:28:45.821 22:36:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:45.821 22:36:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 22:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 22:36:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 22:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 22:36:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 22:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 22:36:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 22:36:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:45.821 22:36:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:45.821 [2024-07-14 22:36:44.800623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:45.821 [2024-07-14 22:36:44.800809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138817 ] 00:28:46.079 [2024-07-14 22:36:44.967726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.079 [2024-07-14 22:36:45.120477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.337 [2024-07-14 22:36:45.363209] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:46.337 [2024-07-14 22:36:45.363287] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:46.337 [2024-07-14 22:36:45.363315] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:46.904 [2024-07-14 22:36:45.942188] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:47.471 22:36:46 -- common/autotest_common.sh@643 -- # es=216 00:28:47.471 22:36:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:47.471 22:36:46 -- common/autotest_common.sh@652 -- # es=88 00:28:47.471 22:36:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:47.471 22:36:46 -- common/autotest_common.sh@660 -- # es=1 00:28:47.471 22:36:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:47.471 22:36:46 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:47.471 22:36:46 -- dd/common.sh@98 -- # xtrace_disable 00:28:47.471 22:36:46 -- common/autotest_common.sh@10 -- # set +x 00:28:47.471 22:36:46 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:47.471 [2024-07-14 22:36:46.316127] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
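The two nofollow runs above symlink dd.dump0.link -> dd.dump0 and assert that --iflag=nofollow / --oflag=nofollow (O_NOFOLLOW) refuse the link with "Too many levels of symbolic links" (ELOOP); the run starting here drops the flag and copies through the link normally. A sketch of both outcomes with coreutils dd, standing in for the traced spdk_dd --aio invocations:

# O_NOFOLLOW semantics: opening the symlink itself must fail with ELOOP.
printf 'payload' > dd.dump0
ln -fs dd.dump0 dd.dump0.link

# with nofollow the open fails: "Too many levels of symbolic links"
dd if=dd.dump0.link iflag=nofollow of=dd.dump1 status=none \
    && echo "unexpected success" || echo "nofollow rejected the symlink"

# without the flag the same copy resolves the link normally
dd if=dd.dump0.link of=dd.dump1 status=none
[[ $(< dd.dump1) == 'payload' ]] && echo "copy via link ok"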
00:28:47.471 [2024-07-14 22:36:46.316264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138840 ] 00:28:47.471 [2024-07-14 22:36:46.467235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.729 [2024-07-14 22:36:46.619858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.924  Copying: 512/512 [B] (average 500 kBps) 00:28:48.924 00:28:48.924 22:36:47 -- dd/posix.sh@49 -- # [[ 10i7uaastkbzmlmmdj3asj21ss0ue6s5lo9skjdptkjmu63e4l4izbkjk2aecedh9zuuqlaumc5f2ukr01jgol7oc82y9mtubvly8ufnzroghovlm149t8tju5qk0guwcx40mnr1y95s7n6yt5ojs7og3urcl5mmiugtwt10b09m9vqxjnoxfscw4qinztcmbfa0kfkb05ttw38x1888dnrzjkrm27qp1ejgosrzcf8eeqmmpf73vjx1flm2d68quo12bk3xrhchoskjbli0fr9tfqykstclgmx0qwfuisk49bdv7mipu3qpb3ueytjfek9l8690cpdrx4yohbe1ddv70itqxpq8aymy3sjzgkzlvcn8a982v2plyvpptlgjf49f1vyb91brfcphs50h7pydfg6hi5cu7ujw4eyh8mc9rd73j8l6tj32areeq1vh6lnrirlahminmyaxj2k6w28a82toiub129nnk1cjnke498xqp66ar31tdpxu0yb0 == \1\0\i\7\u\a\a\s\t\k\b\z\m\l\m\m\d\j\3\a\s\j\2\1\s\s\0\u\e\6\s\5\l\o\9\s\k\j\d\p\t\k\j\m\u\6\3\e\4\l\4\i\z\b\k\j\k\2\a\e\c\e\d\h\9\z\u\u\q\l\a\u\m\c\5\f\2\u\k\r\0\1\j\g\o\l\7\o\c\8\2\y\9\m\t\u\b\v\l\y\8\u\f\n\z\r\o\g\h\o\v\l\m\1\4\9\t\8\t\j\u\5\q\k\0\g\u\w\c\x\4\0\m\n\r\1\y\9\5\s\7\n\6\y\t\5\o\j\s\7\o\g\3\u\r\c\l\5\m\m\i\u\g\t\w\t\1\0\b\0\9\m\9\v\q\x\j\n\o\x\f\s\c\w\4\q\i\n\z\t\c\m\b\f\a\0\k\f\k\b\0\5\t\t\w\3\8\x\1\8\8\8\d\n\r\z\j\k\r\m\2\7\q\p\1\e\j\g\o\s\r\z\c\f\8\e\e\q\m\m\p\f\7\3\v\j\x\1\f\l\m\2\d\6\8\q\u\o\1\2\b\k\3\x\r\h\c\h\o\s\k\j\b\l\i\0\f\r\9\t\f\q\y\k\s\t\c\l\g\m\x\0\q\w\f\u\i\s\k\4\9\b\d\v\7\m\i\p\u\3\q\p\b\3\u\e\y\t\j\f\e\k\9\l\8\6\9\0\c\p\d\r\x\4\y\o\h\b\e\1\d\d\v\7\0\i\t\q\x\p\q\8\a\y\m\y\3\s\j\z\g\k\z\l\v\c\n\8\a\9\8\2\v\2\p\l\y\v\p\p\t\l\g\j\f\4\9\f\1\v\y\b\9\1\b\r\f\c\p\h\s\5\0\h\7\p\y\d\f\g\6\h\i\5\c\u\7\u\j\w\4\e\y\h\8\m\c\9\r\d\7\3\j\8\l\6\t\j\3\2\a\r\e\e\q\1\v\h\6\l\n\r\i\r\l\a\h\m\i\n\m\y\a\x\j\2\k\6\w\2\8\a\8\2\t\o\i\u\b\1\2\9\n\n\k\1\c\j\n\k\e\4\9\8\x\q\p\6\6\a\r\3\1\t\d\p\x\u\0\y\b\0 ]] 00:28:48.924 00:28:48.924 real 0m4.587s 00:28:48.924 user 0m3.635s 00:28:48.924 sys 0m0.612s 00:28:48.924 22:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.924 22:36:47 -- common/autotest_common.sh@10 -- # set +x 00:28:48.924 ************************************ 00:28:48.924 END TEST dd_flag_nofollow_forced_aio 00:28:48.924 ************************************ 00:28:48.924 22:36:47 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:48.924 22:36:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:48.924 22:36:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:48.924 22:36:47 -- common/autotest_common.sh@10 -- # set +x 00:28:48.924 ************************************ 00:28:48.924 START TEST dd_flag_noatime_forced_aio 00:28:48.924 ************************************ 00:28:48.924 22:36:47 -- common/autotest_common.sh@1104 -- # noatime 00:28:48.924 22:36:47 -- dd/posix.sh@53 -- # local atime_if 00:28:48.924 22:36:47 -- dd/posix.sh@54 -- # local atime_of 00:28:48.924 22:36:47 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:48.924 22:36:47 -- dd/common.sh@98 -- # xtrace_disable 00:28:48.924 22:36:47 -- common/autotest_common.sh@10 -- # set +x 00:28:48.924 22:36:47 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:48.924 22:36:47 -- dd/posix.sh@60 -- # atime_if=1720996606 
00:28:48.924 22:36:47 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:48.924 22:36:47 -- dd/posix.sh@61 -- # atime_of=1720996607 00:28:48.924 22:36:47 -- dd/posix.sh@66 -- # sleep 1 00:28:49.860 22:36:48 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:49.860 [2024-07-14 22:36:48.926895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:49.860 [2024-07-14 22:36:48.927043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138897 ] 00:28:50.119 [2024-07-14 22:36:49.079185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.377 [2024-07-14 22:36:49.232121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.571  Copying: 512/512 [B] (average 500 kBps) 00:28:51.571 00:28:51.571 22:36:50 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.571 22:36:50 -- dd/posix.sh@69 -- # (( atime_if == 1720996606 )) 00:28:51.571 22:36:50 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.571 22:36:50 -- dd/posix.sh@70 -- # (( atime_of == 1720996607 )) 00:28:51.571 22:36:50 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.571 [2024-07-14 22:36:50.463127] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
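dd_flag_noatime_forced_aio records each dump file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and asserts the source atime is unchanged (the (( atime_if == 1720996606 )) check above); the second copy, just starting, re-stats the source after a plain read. A condensed sketch; the check is only meaningful where the filesystem actually maintains atimes:

# O_NOATIME: a copy with iflag=noatime must leave the source atime untouched.
# Only meaningful where the filesystem maintains atimes (not mounted noatime).
printf 'payload' > dd.dump0
atime_before=$(stat --printf=%X dd.dump0)
sleep 1

dd if=dd.dump0 iflag=noatime of=dd.dump1 status=none
(( $(stat --printf=%X dd.dump0) == atime_before )) && echo "atime preserved"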
00:28:51.571 [2024-07-14 22:36:50.463323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138923 ] 00:28:51.571 [2024-07-14 22:36:50.630226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.830 [2024-07-14 22:36:50.786676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.025  Copying: 512/512 [B] (average 500 kBps) 00:28:53.025 00:28:53.025 22:36:51 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:53.025 22:36:51 -- dd/posix.sh@73 -- # (( atime_if < 1720996611 )) 00:28:53.025 00:28:53.025 real 0m4.140s 00:28:53.025 user 0m2.452s 00:28:53.025 sys 0m0.431s 00:28:53.025 22:36:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.025 22:36:51 -- common/autotest_common.sh@10 -- # set +x 00:28:53.025 ************************************ 00:28:53.025 END TEST dd_flag_noatime_forced_aio 00:28:53.025 ************************************ 00:28:53.025 22:36:52 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:53.025 22:36:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:53.025 22:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:53.025 22:36:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.025 ************************************ 00:28:53.025 START TEST dd_flags_misc_forced_aio 00:28:53.025 ************************************ 00:28:53.025 22:36:52 -- common/autotest_common.sh@1104 -- # io 00:28:53.025 22:36:52 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:53.025 22:36:52 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:53.025 22:36:52 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:53.025 22:36:52 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:53.025 22:36:52 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:53.025 22:36:52 -- dd/common.sh@98 -- # xtrace_disable 00:28:53.025 22:36:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.025 22:36:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:53.025 22:36:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:53.297 [2024-07-14 22:36:52.108426] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
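dd_flags_misc_forced_aio, which starts above, replays the full read/write flag matrix through the AIO path: flags_ro=(direct nonblock) crossed with flags_rw=(direct nonblock sync dsync), one 512-byte round-trip per pair, giving the eight runs traced below. The driving loop, reconstructed from the xtrace (gen_bytes is the suite helper from dd/common.sh; the comparison is simplified to a direct file diff, where the real script builds the expected string in a variable):

# Reconstruction of the traced loop: each read-flag/write-flag pair gets one
# 512-byte copy, and every result is compared against the source afterwards.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd  # binary path from the trace
D=/home/vagrant/spdk_repo/spdk/test/dd             # dump-file directory

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)             # direct nonblock sync dsync

for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512                                  # suite helper: refresh dd.dump0
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --aio --if="$D/dd.dump0" --iflag="$flag_ro" \
                    --of="$D/dd.dump1" --oflag="$flag_rw"
        [[ $(< "$D/dd.dump1") == $(< "$D/dd.dump0") ]]   # must round-trip
    done
done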
00:28:53.297 [2024-07-14 22:36:52.108602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138988 ] 00:28:53.297 [2024-07-14 22:36:52.275515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.587 [2024-07-14 22:36:52.429461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.788  Copying: 512/512 [B] (average 500 kBps) 00:28:54.788 00:28:54.788 22:36:53 -- dd/posix.sh@93 -- # [[ y6k06sx8qvkl17yw3vx3w6t6ptb7f45zbofotsfrufjsqcynzigsxs7kh6es37ghn49cl1z2u4ftr60a9k7fjqm17p8hzko1n5ul4qp2pauxzq399oums32v2ely5cbpfqwxpx8ae4ayrf2pbggfb37jbkz2ilx25te74v90yzcsf2baasq8spsjwgps970rt28l73oujni4fgew6eth9qt494txislw3oo5w2tjob1g2eesnhbspdsllowcj7kgzz2utkfl01f44lp3s4v4uwes938ph9ayzdmt3wyyu6soqkhaxo87gbcjxkf5n2tp2l6x0emnsze8r71mrea4jouwsssnd33hrqiwzrlqmkivkkf6a30tzg3femfifs5kwftrnf3t5khevhbe8u2rsb8jby7gl5t1z6xazl11kikif4124y9dvl3tfexju119jcg7m6y0hiolr983s0qfgkuumla24ydklo0r3dwbqlelokvtkrg6thh4i2y5le0z == \y\6\k\0\6\s\x\8\q\v\k\l\1\7\y\w\3\v\x\3\w\6\t\6\p\t\b\7\f\4\5\z\b\o\f\o\t\s\f\r\u\f\j\s\q\c\y\n\z\i\g\s\x\s\7\k\h\6\e\s\3\7\g\h\n\4\9\c\l\1\z\2\u\4\f\t\r\6\0\a\9\k\7\f\j\q\m\1\7\p\8\h\z\k\o\1\n\5\u\l\4\q\p\2\p\a\u\x\z\q\3\9\9\o\u\m\s\3\2\v\2\e\l\y\5\c\b\p\f\q\w\x\p\x\8\a\e\4\a\y\r\f\2\p\b\g\g\f\b\3\7\j\b\k\z\2\i\l\x\2\5\t\e\7\4\v\9\0\y\z\c\s\f\2\b\a\a\s\q\8\s\p\s\j\w\g\p\s\9\7\0\r\t\2\8\l\7\3\o\u\j\n\i\4\f\g\e\w\6\e\t\h\9\q\t\4\9\4\t\x\i\s\l\w\3\o\o\5\w\2\t\j\o\b\1\g\2\e\e\s\n\h\b\s\p\d\s\l\l\o\w\c\j\7\k\g\z\z\2\u\t\k\f\l\0\1\f\4\4\l\p\3\s\4\v\4\u\w\e\s\9\3\8\p\h\9\a\y\z\d\m\t\3\w\y\y\u\6\s\o\q\k\h\a\x\o\8\7\g\b\c\j\x\k\f\5\n\2\t\p\2\l\6\x\0\e\m\n\s\z\e\8\r\7\1\m\r\e\a\4\j\o\u\w\s\s\s\n\d\3\3\h\r\q\i\w\z\r\l\q\m\k\i\v\k\k\f\6\a\3\0\t\z\g\3\f\e\m\f\i\f\s\5\k\w\f\t\r\n\f\3\t\5\k\h\e\v\h\b\e\8\u\2\r\s\b\8\j\b\y\7\g\l\5\t\1\z\6\x\a\z\l\1\1\k\i\k\i\f\4\1\2\4\y\9\d\v\l\3\t\f\e\x\j\u\1\1\9\j\c\g\7\m\6\y\0\h\i\o\l\r\9\8\3\s\0\q\f\g\k\u\u\m\l\a\2\4\y\d\k\l\o\0\r\3\d\w\b\q\l\e\l\o\k\v\t\k\r\g\6\t\h\h\4\i\2\y\5\l\e\0\z ]] 00:28:54.788 22:36:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:54.788 22:36:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:54.788 [2024-07-14 22:36:53.671759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:54.788 [2024-07-14 22:36:53.671963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139008 ] 00:28:54.788 [2024-07-14 22:36:53.838845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.045 [2024-07-14 22:36:53.991503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.238  Copying: 512/512 [B] (average 500 kBps) 00:28:56.238 00:28:56.238 22:36:55 -- dd/posix.sh@93 -- # [[ y6k06sx8qvkl17yw3vx3w6t6ptb7f45zbofotsfrufjsqcynzigsxs7kh6es37ghn49cl1z2u4ftr60a9k7fjqm17p8hzko1n5ul4qp2pauxzq399oums32v2ely5cbpfqwxpx8ae4ayrf2pbggfb37jbkz2ilx25te74v90yzcsf2baasq8spsjwgps970rt28l73oujni4fgew6eth9qt494txislw3oo5w2tjob1g2eesnhbspdsllowcj7kgzz2utkfl01f44lp3s4v4uwes938ph9ayzdmt3wyyu6soqkhaxo87gbcjxkf5n2tp2l6x0emnsze8r71mrea4jouwsssnd33hrqiwzrlqmkivkkf6a30tzg3femfifs5kwftrnf3t5khevhbe8u2rsb8jby7gl5t1z6xazl11kikif4124y9dvl3tfexju119jcg7m6y0hiolr983s0qfgkuumla24ydklo0r3dwbqlelokvtkrg6thh4i2y5le0z == \y\6\k\0\6\s\x\8\q\v\k\l\1\7\y\w\3\v\x\3\w\6\t\6\p\t\b\7\f\4\5\z\b\o\f\o\t\s\f\r\u\f\j\s\q\c\y\n\z\i\g\s\x\s\7\k\h\6\e\s\3\7\g\h\n\4\9\c\l\1\z\2\u\4\f\t\r\6\0\a\9\k\7\f\j\q\m\1\7\p\8\h\z\k\o\1\n\5\u\l\4\q\p\2\p\a\u\x\z\q\3\9\9\o\u\m\s\3\2\v\2\e\l\y\5\c\b\p\f\q\w\x\p\x\8\a\e\4\a\y\r\f\2\p\b\g\g\f\b\3\7\j\b\k\z\2\i\l\x\2\5\t\e\7\4\v\9\0\y\z\c\s\f\2\b\a\a\s\q\8\s\p\s\j\w\g\p\s\9\7\0\r\t\2\8\l\7\3\o\u\j\n\i\4\f\g\e\w\6\e\t\h\9\q\t\4\9\4\t\x\i\s\l\w\3\o\o\5\w\2\t\j\o\b\1\g\2\e\e\s\n\h\b\s\p\d\s\l\l\o\w\c\j\7\k\g\z\z\2\u\t\k\f\l\0\1\f\4\4\l\p\3\s\4\v\4\u\w\e\s\9\3\8\p\h\9\a\y\z\d\m\t\3\w\y\y\u\6\s\o\q\k\h\a\x\o\8\7\g\b\c\j\x\k\f\5\n\2\t\p\2\l\6\x\0\e\m\n\s\z\e\8\r\7\1\m\r\e\a\4\j\o\u\w\s\s\s\n\d\3\3\h\r\q\i\w\z\r\l\q\m\k\i\v\k\k\f\6\a\3\0\t\z\g\3\f\e\m\f\i\f\s\5\k\w\f\t\r\n\f\3\t\5\k\h\e\v\h\b\e\8\u\2\r\s\b\8\j\b\y\7\g\l\5\t\1\z\6\x\a\z\l\1\1\k\i\k\i\f\4\1\2\4\y\9\d\v\l\3\t\f\e\x\j\u\1\1\9\j\c\g\7\m\6\y\0\h\i\o\l\r\9\8\3\s\0\q\f\g\k\u\u\m\l\a\2\4\y\d\k\l\o\0\r\3\d\w\b\q\l\e\l\o\k\v\t\k\r\g\6\t\h\h\4\i\2\y\5\l\e\0\z ]] 00:28:56.238 22:36:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:56.238 22:36:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:56.238 [2024-07-14 22:36:55.240920] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:56.238 [2024-07-14 22:36:55.241967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139032 ] 00:28:56.496 [2024-07-14 22:36:55.408838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.496 [2024-07-14 22:36:55.564720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.690  Copying: 512/512 [B] (average 250 kBps) 00:28:57.690 00:28:57.690 22:36:56 -- dd/posix.sh@93 -- # [[ y6k06sx8qvkl17yw3vx3w6t6ptb7f45zbofotsfrufjsqcynzigsxs7kh6es37ghn49cl1z2u4ftr60a9k7fjqm17p8hzko1n5ul4qp2pauxzq399oums32v2ely5cbpfqwxpx8ae4ayrf2pbggfb37jbkz2ilx25te74v90yzcsf2baasq8spsjwgps970rt28l73oujni4fgew6eth9qt494txislw3oo5w2tjob1g2eesnhbspdsllowcj7kgzz2utkfl01f44lp3s4v4uwes938ph9ayzdmt3wyyu6soqkhaxo87gbcjxkf5n2tp2l6x0emnsze8r71mrea4jouwsssnd33hrqiwzrlqmkivkkf6a30tzg3femfifs5kwftrnf3t5khevhbe8u2rsb8jby7gl5t1z6xazl11kikif4124y9dvl3tfexju119jcg7m6y0hiolr983s0qfgkuumla24ydklo0r3dwbqlelokvtkrg6thh4i2y5le0z == \y\6\k\0\6\s\x\8\q\v\k\l\1\7\y\w\3\v\x\3\w\6\t\6\p\t\b\7\f\4\5\z\b\o\f\o\t\s\f\r\u\f\j\s\q\c\y\n\z\i\g\s\x\s\7\k\h\6\e\s\3\7\g\h\n\4\9\c\l\1\z\2\u\4\f\t\r\6\0\a\9\k\7\f\j\q\m\1\7\p\8\h\z\k\o\1\n\5\u\l\4\q\p\2\p\a\u\x\z\q\3\9\9\o\u\m\s\3\2\v\2\e\l\y\5\c\b\p\f\q\w\x\p\x\8\a\e\4\a\y\r\f\2\p\b\g\g\f\b\3\7\j\b\k\z\2\i\l\x\2\5\t\e\7\4\v\9\0\y\z\c\s\f\2\b\a\a\s\q\8\s\p\s\j\w\g\p\s\9\7\0\r\t\2\8\l\7\3\o\u\j\n\i\4\f\g\e\w\6\e\t\h\9\q\t\4\9\4\t\x\i\s\l\w\3\o\o\5\w\2\t\j\o\b\1\g\2\e\e\s\n\h\b\s\p\d\s\l\l\o\w\c\j\7\k\g\z\z\2\u\t\k\f\l\0\1\f\4\4\l\p\3\s\4\v\4\u\w\e\s\9\3\8\p\h\9\a\y\z\d\m\t\3\w\y\y\u\6\s\o\q\k\h\a\x\o\8\7\g\b\c\j\x\k\f\5\n\2\t\p\2\l\6\x\0\e\m\n\s\z\e\8\r\7\1\m\r\e\a\4\j\o\u\w\s\s\s\n\d\3\3\h\r\q\i\w\z\r\l\q\m\k\i\v\k\k\f\6\a\3\0\t\z\g\3\f\e\m\f\i\f\s\5\k\w\f\t\r\n\f\3\t\5\k\h\e\v\h\b\e\8\u\2\r\s\b\8\j\b\y\7\g\l\5\t\1\z\6\x\a\z\l\1\1\k\i\k\i\f\4\1\2\4\y\9\d\v\l\3\t\f\e\x\j\u\1\1\9\j\c\g\7\m\6\y\0\h\i\o\l\r\9\8\3\s\0\q\f\g\k\u\u\m\l\a\2\4\y\d\k\l\o\0\r\3\d\w\b\q\l\e\l\o\k\v\t\k\r\g\6\t\h\h\4\i\2\y\5\l\e\0\z ]] 00:28:57.690 22:36:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:57.690 22:36:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:57.950 [2024-07-14 22:36:56.820162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:57.950 [2024-07-14 22:36:56.820375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139049 ] 00:28:57.950 [2024-07-14 22:36:56.986836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.209 [2024-07-14 22:36:57.160563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.403  Copying: 512/512 [B] (average 83 kBps) 00:28:59.403 00:28:59.403 22:36:58 -- dd/posix.sh@93 -- # [[ y6k06sx8qvkl17yw3vx3w6t6ptb7f45zbofotsfrufjsqcynzigsxs7kh6es37ghn49cl1z2u4ftr60a9k7fjqm17p8hzko1n5ul4qp2pauxzq399oums32v2ely5cbpfqwxpx8ae4ayrf2pbggfb37jbkz2ilx25te74v90yzcsf2baasq8spsjwgps970rt28l73oujni4fgew6eth9qt494txislw3oo5w2tjob1g2eesnhbspdsllowcj7kgzz2utkfl01f44lp3s4v4uwes938ph9ayzdmt3wyyu6soqkhaxo87gbcjxkf5n2tp2l6x0emnsze8r71mrea4jouwsssnd33hrqiwzrlqmkivkkf6a30tzg3femfifs5kwftrnf3t5khevhbe8u2rsb8jby7gl5t1z6xazl11kikif4124y9dvl3tfexju119jcg7m6y0hiolr983s0qfgkuumla24ydklo0r3dwbqlelokvtkrg6thh4i2y5le0z == \y\6\k\0\6\s\x\8\q\v\k\l\1\7\y\w\3\v\x\3\w\6\t\6\p\t\b\7\f\4\5\z\b\o\f\o\t\s\f\r\u\f\j\s\q\c\y\n\z\i\g\s\x\s\7\k\h\6\e\s\3\7\g\h\n\4\9\c\l\1\z\2\u\4\f\t\r\6\0\a\9\k\7\f\j\q\m\1\7\p\8\h\z\k\o\1\n\5\u\l\4\q\p\2\p\a\u\x\z\q\3\9\9\o\u\m\s\3\2\v\2\e\l\y\5\c\b\p\f\q\w\x\p\x\8\a\e\4\a\y\r\f\2\p\b\g\g\f\b\3\7\j\b\k\z\2\i\l\x\2\5\t\e\7\4\v\9\0\y\z\c\s\f\2\b\a\a\s\q\8\s\p\s\j\w\g\p\s\9\7\0\r\t\2\8\l\7\3\o\u\j\n\i\4\f\g\e\w\6\e\t\h\9\q\t\4\9\4\t\x\i\s\l\w\3\o\o\5\w\2\t\j\o\b\1\g\2\e\e\s\n\h\b\s\p\d\s\l\l\o\w\c\j\7\k\g\z\z\2\u\t\k\f\l\0\1\f\4\4\l\p\3\s\4\v\4\u\w\e\s\9\3\8\p\h\9\a\y\z\d\m\t\3\w\y\y\u\6\s\o\q\k\h\a\x\o\8\7\g\b\c\j\x\k\f\5\n\2\t\p\2\l\6\x\0\e\m\n\s\z\e\8\r\7\1\m\r\e\a\4\j\o\u\w\s\s\s\n\d\3\3\h\r\q\i\w\z\r\l\q\m\k\i\v\k\k\f\6\a\3\0\t\z\g\3\f\e\m\f\i\f\s\5\k\w\f\t\r\n\f\3\t\5\k\h\e\v\h\b\e\8\u\2\r\s\b\8\j\b\y\7\g\l\5\t\1\z\6\x\a\z\l\1\1\k\i\k\i\f\4\1\2\4\y\9\d\v\l\3\t\f\e\x\j\u\1\1\9\j\c\g\7\m\6\y\0\h\i\o\l\r\9\8\3\s\0\q\f\g\k\u\u\m\l\a\2\4\y\d\k\l\o\0\r\3\d\w\b\q\l\e\l\o\k\v\t\k\r\g\6\t\h\h\4\i\2\y\5\l\e\0\z ]] 00:28:59.403 22:36:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:59.403 22:36:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:59.403 22:36:58 -- dd/common.sh@98 -- # xtrace_disable 00:28:59.403 22:36:58 -- common/autotest_common.sh@10 -- # set +x 00:28:59.403 22:36:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:59.403 22:36:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:59.403 [2024-07-14 22:36:58.419836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:59.403 [2024-07-14 22:36:58.420031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139077 ] 00:28:59.662 [2024-07-14 22:36:58.587169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.920 [2024-07-14 22:36:58.739603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.855  Copying: 512/512 [B] (average 500 kBps) 00:29:00.855 00:29:00.855 22:36:59 -- dd/posix.sh@93 -- # [[ i6msa3hql11uoqky4xgdx6llf8gdj42ucl0jjkwl8kubwa4tch9ha3upewoo5tgw6i53ut5fo46htyf9de0afbn5lo7bqk237vu9dgrhuwnuzkk7kr726u66eb8ivp4vh1z8eivhpej18aor18urbrc2r4hdhqezpsu6bmz84nnmm25qs779fiwmg85ag18xb72vv6p0ds93hdfs7tz58wa17epi6o745ugptejnm1kvvgn8wix8ox5iv0pmziwu0a7uk5au3wzdmgdhcd0crgicb4jtv596ozwdt10ccxul98ddj2j96r4mbr7og2n27nmu8uvzxzh7p2zwde5ecpilgq7uc87as8snuify8py17ppd9aryqj8480gsy0hq4aezh7nilmylzv0o7z15m9726kx5o0gf0v6sky3al34qhid3z432nijpa6by05oicl8zagh75b1710x5fv726kqgfhtxbws94ajl5xlgio448afgaicrn1wnk6u3opya == \i\6\m\s\a\3\h\q\l\1\1\u\o\q\k\y\4\x\g\d\x\6\l\l\f\8\g\d\j\4\2\u\c\l\0\j\j\k\w\l\8\k\u\b\w\a\4\t\c\h\9\h\a\3\u\p\e\w\o\o\5\t\g\w\6\i\5\3\u\t\5\f\o\4\6\h\t\y\f\9\d\e\0\a\f\b\n\5\l\o\7\b\q\k\2\3\7\v\u\9\d\g\r\h\u\w\n\u\z\k\k\7\k\r\7\2\6\u\6\6\e\b\8\i\v\p\4\v\h\1\z\8\e\i\v\h\p\e\j\1\8\a\o\r\1\8\u\r\b\r\c\2\r\4\h\d\h\q\e\z\p\s\u\6\b\m\z\8\4\n\n\m\m\2\5\q\s\7\7\9\f\i\w\m\g\8\5\a\g\1\8\x\b\7\2\v\v\6\p\0\d\s\9\3\h\d\f\s\7\t\z\5\8\w\a\1\7\e\p\i\6\o\7\4\5\u\g\p\t\e\j\n\m\1\k\v\v\g\n\8\w\i\x\8\o\x\5\i\v\0\p\m\z\i\w\u\0\a\7\u\k\5\a\u\3\w\z\d\m\g\d\h\c\d\0\c\r\g\i\c\b\4\j\t\v\5\9\6\o\z\w\d\t\1\0\c\c\x\u\l\9\8\d\d\j\2\j\9\6\r\4\m\b\r\7\o\g\2\n\2\7\n\m\u\8\u\v\z\x\z\h\7\p\2\z\w\d\e\5\e\c\p\i\l\g\q\7\u\c\8\7\a\s\8\s\n\u\i\f\y\8\p\y\1\7\p\p\d\9\a\r\y\q\j\8\4\8\0\g\s\y\0\h\q\4\a\e\z\h\7\n\i\l\m\y\l\z\v\0\o\7\z\1\5\m\9\7\2\6\k\x\5\o\0\g\f\0\v\6\s\k\y\3\a\l\3\4\q\h\i\d\3\z\4\3\2\n\i\j\p\a\6\b\y\0\5\o\i\c\l\8\z\a\g\h\7\5\b\1\7\1\0\x\5\f\v\7\2\6\k\q\g\f\h\t\x\b\w\s\9\4\a\j\l\5\x\l\g\i\o\4\4\8\a\f\g\a\i\c\r\n\1\w\n\k\6\u\3\o\p\y\a ]] 00:29:00.855 22:36:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:00.855 22:36:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:29:01.113 [2024-07-14 22:36:59.980580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:01.113 [2024-07-14 22:36:59.980771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139095 ] 00:29:01.113 [2024-07-14 22:37:00.146863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.371 [2024-07-14 22:37:00.300078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.565  Copying: 512/512 [B] (average 500 kBps) 00:29:02.565 00:29:02.565 22:37:01 -- dd/posix.sh@93 -- # [[ i6msa3hql11uoqky4xgdx6llf8gdj42ucl0jjkwl8kubwa4tch9ha3upewoo5tgw6i53ut5fo46htyf9de0afbn5lo7bqk237vu9dgrhuwnuzkk7kr726u66eb8ivp4vh1z8eivhpej18aor18urbrc2r4hdhqezpsu6bmz84nnmm25qs779fiwmg85ag18xb72vv6p0ds93hdfs7tz58wa17epi6o745ugptejnm1kvvgn8wix8ox5iv0pmziwu0a7uk5au3wzdmgdhcd0crgicb4jtv596ozwdt10ccxul98ddj2j96r4mbr7og2n27nmu8uvzxzh7p2zwde5ecpilgq7uc87as8snuify8py17ppd9aryqj8480gsy0hq4aezh7nilmylzv0o7z15m9726kx5o0gf0v6sky3al34qhid3z432nijpa6by05oicl8zagh75b1710x5fv726kqgfhtxbws94ajl5xlgio448afgaicrn1wnk6u3opya == \i\6\m\s\a\3\h\q\l\1\1\u\o\q\k\y\4\x\g\d\x\6\l\l\f\8\g\d\j\4\2\u\c\l\0\j\j\k\w\l\8\k\u\b\w\a\4\t\c\h\9\h\a\3\u\p\e\w\o\o\5\t\g\w\6\i\5\3\u\t\5\f\o\4\6\h\t\y\f\9\d\e\0\a\f\b\n\5\l\o\7\b\q\k\2\3\7\v\u\9\d\g\r\h\u\w\n\u\z\k\k\7\k\r\7\2\6\u\6\6\e\b\8\i\v\p\4\v\h\1\z\8\e\i\v\h\p\e\j\1\8\a\o\r\1\8\u\r\b\r\c\2\r\4\h\d\h\q\e\z\p\s\u\6\b\m\z\8\4\n\n\m\m\2\5\q\s\7\7\9\f\i\w\m\g\8\5\a\g\1\8\x\b\7\2\v\v\6\p\0\d\s\9\3\h\d\f\s\7\t\z\5\8\w\a\1\7\e\p\i\6\o\7\4\5\u\g\p\t\e\j\n\m\1\k\v\v\g\n\8\w\i\x\8\o\x\5\i\v\0\p\m\z\i\w\u\0\a\7\u\k\5\a\u\3\w\z\d\m\g\d\h\c\d\0\c\r\g\i\c\b\4\j\t\v\5\9\6\o\z\w\d\t\1\0\c\c\x\u\l\9\8\d\d\j\2\j\9\6\r\4\m\b\r\7\o\g\2\n\2\7\n\m\u\8\u\v\z\x\z\h\7\p\2\z\w\d\e\5\e\c\p\i\l\g\q\7\u\c\8\7\a\s\8\s\n\u\i\f\y\8\p\y\1\7\p\p\d\9\a\r\y\q\j\8\4\8\0\g\s\y\0\h\q\4\a\e\z\h\7\n\i\l\m\y\l\z\v\0\o\7\z\1\5\m\9\7\2\6\k\x\5\o\0\g\f\0\v\6\s\k\y\3\a\l\3\4\q\h\i\d\3\z\4\3\2\n\i\j\p\a\6\b\y\0\5\o\i\c\l\8\z\a\g\h\7\5\b\1\7\1\0\x\5\f\v\7\2\6\k\q\g\f\h\t\x\b\w\s\9\4\a\j\l\5\x\l\g\i\o\4\4\8\a\f\g\a\i\c\r\n\1\w\n\k\6\u\3\o\p\y\a ]] 00:29:02.565 22:37:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:02.565 22:37:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:02.565 [2024-07-14 22:37:01.546666] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:02.565 [2024-07-14 22:37:01.546865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139119 ] 00:29:02.824 [2024-07-14 22:37:01.712189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.824 [2024-07-14 22:37:01.877937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.018  Copying: 512/512 [B] (average 166 kBps) 00:29:04.018 00:29:04.018 22:37:03 -- dd/posix.sh@93 -- # [[ i6msa3hql11uoqky4xgdx6llf8gdj42ucl0jjkwl8kubwa4tch9ha3upewoo5tgw6i53ut5fo46htyf9de0afbn5lo7bqk237vu9dgrhuwnuzkk7kr726u66eb8ivp4vh1z8eivhpej18aor18urbrc2r4hdhqezpsu6bmz84nnmm25qs779fiwmg85ag18xb72vv6p0ds93hdfs7tz58wa17epi6o745ugptejnm1kvvgn8wix8ox5iv0pmziwu0a7uk5au3wzdmgdhcd0crgicb4jtv596ozwdt10ccxul98ddj2j96r4mbr7og2n27nmu8uvzxzh7p2zwde5ecpilgq7uc87as8snuify8py17ppd9aryqj8480gsy0hq4aezh7nilmylzv0o7z15m9726kx5o0gf0v6sky3al34qhid3z432nijpa6by05oicl8zagh75b1710x5fv726kqgfhtxbws94ajl5xlgio448afgaicrn1wnk6u3opya == \i\6\m\s\a\3\h\q\l\1\1\u\o\q\k\y\4\x\g\d\x\6\l\l\f\8\g\d\j\4\2\u\c\l\0\j\j\k\w\l\8\k\u\b\w\a\4\t\c\h\9\h\a\3\u\p\e\w\o\o\5\t\g\w\6\i\5\3\u\t\5\f\o\4\6\h\t\y\f\9\d\e\0\a\f\b\n\5\l\o\7\b\q\k\2\3\7\v\u\9\d\g\r\h\u\w\n\u\z\k\k\7\k\r\7\2\6\u\6\6\e\b\8\i\v\p\4\v\h\1\z\8\e\i\v\h\p\e\j\1\8\a\o\r\1\8\u\r\b\r\c\2\r\4\h\d\h\q\e\z\p\s\u\6\b\m\z\8\4\n\n\m\m\2\5\q\s\7\7\9\f\i\w\m\g\8\5\a\g\1\8\x\b\7\2\v\v\6\p\0\d\s\9\3\h\d\f\s\7\t\z\5\8\w\a\1\7\e\p\i\6\o\7\4\5\u\g\p\t\e\j\n\m\1\k\v\v\g\n\8\w\i\x\8\o\x\5\i\v\0\p\m\z\i\w\u\0\a\7\u\k\5\a\u\3\w\z\d\m\g\d\h\c\d\0\c\r\g\i\c\b\4\j\t\v\5\9\6\o\z\w\d\t\1\0\c\c\x\u\l\9\8\d\d\j\2\j\9\6\r\4\m\b\r\7\o\g\2\n\2\7\n\m\u\8\u\v\z\x\z\h\7\p\2\z\w\d\e\5\e\c\p\i\l\g\q\7\u\c\8\7\a\s\8\s\n\u\i\f\y\8\p\y\1\7\p\p\d\9\a\r\y\q\j\8\4\8\0\g\s\y\0\h\q\4\a\e\z\h\7\n\i\l\m\y\l\z\v\0\o\7\z\1\5\m\9\7\2\6\k\x\5\o\0\g\f\0\v\6\s\k\y\3\a\l\3\4\q\h\i\d\3\z\4\3\2\n\i\j\p\a\6\b\y\0\5\o\i\c\l\8\z\a\g\h\7\5\b\1\7\1\0\x\5\f\v\7\2\6\k\q\g\f\h\t\x\b\w\s\9\4\a\j\l\5\x\l\g\i\o\4\4\8\a\f\g\a\i\c\r\n\1\w\n\k\6\u\3\o\p\y\a ]] 00:29:04.018 22:37:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:04.018 22:37:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:04.277 [2024-07-14 22:37:03.138351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:04.277 [2024-07-14 22:37:03.138552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139154 ] 00:29:04.277 [2024-07-14 22:37:03.308113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.535 [2024-07-14 22:37:03.471250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.726  Copying: 512/512 [B] (average 250 kBps) 00:29:05.726 00:29:05.726 ************************************ 00:29:05.726 END TEST dd_flags_misc_forced_aio 00:29:05.726 ************************************ 00:29:05.726 22:37:04 -- dd/posix.sh@93 -- # [[ i6msa3hql11uoqky4xgdx6llf8gdj42ucl0jjkwl8kubwa4tch9ha3upewoo5tgw6i53ut5fo46htyf9de0afbn5lo7bqk237vu9dgrhuwnuzkk7kr726u66eb8ivp4vh1z8eivhpej18aor18urbrc2r4hdhqezpsu6bmz84nnmm25qs779fiwmg85ag18xb72vv6p0ds93hdfs7tz58wa17epi6o745ugptejnm1kvvgn8wix8ox5iv0pmziwu0a7uk5au3wzdmgdhcd0crgicb4jtv596ozwdt10ccxul98ddj2j96r4mbr7og2n27nmu8uvzxzh7p2zwde5ecpilgq7uc87as8snuify8py17ppd9aryqj8480gsy0hq4aezh7nilmylzv0o7z15m9726kx5o0gf0v6sky3al34qhid3z432nijpa6by05oicl8zagh75b1710x5fv726kqgfhtxbws94ajl5xlgio448afgaicrn1wnk6u3opya == \i\6\m\s\a\3\h\q\l\1\1\u\o\q\k\y\4\x\g\d\x\6\l\l\f\8\g\d\j\4\2\u\c\l\0\j\j\k\w\l\8\k\u\b\w\a\4\t\c\h\9\h\a\3\u\p\e\w\o\o\5\t\g\w\6\i\5\3\u\t\5\f\o\4\6\h\t\y\f\9\d\e\0\a\f\b\n\5\l\o\7\b\q\k\2\3\7\v\u\9\d\g\r\h\u\w\n\u\z\k\k\7\k\r\7\2\6\u\6\6\e\b\8\i\v\p\4\v\h\1\z\8\e\i\v\h\p\e\j\1\8\a\o\r\1\8\u\r\b\r\c\2\r\4\h\d\h\q\e\z\p\s\u\6\b\m\z\8\4\n\n\m\m\2\5\q\s\7\7\9\f\i\w\m\g\8\5\a\g\1\8\x\b\7\2\v\v\6\p\0\d\s\9\3\h\d\f\s\7\t\z\5\8\w\a\1\7\e\p\i\6\o\7\4\5\u\g\p\t\e\j\n\m\1\k\v\v\g\n\8\w\i\x\8\o\x\5\i\v\0\p\m\z\i\w\u\0\a\7\u\k\5\a\u\3\w\z\d\m\g\d\h\c\d\0\c\r\g\i\c\b\4\j\t\v\5\9\6\o\z\w\d\t\1\0\c\c\x\u\l\9\8\d\d\j\2\j\9\6\r\4\m\b\r\7\o\g\2\n\2\7\n\m\u\8\u\v\z\x\z\h\7\p\2\z\w\d\e\5\e\c\p\i\l\g\q\7\u\c\8\7\a\s\8\s\n\u\i\f\y\8\p\y\1\7\p\p\d\9\a\r\y\q\j\8\4\8\0\g\s\y\0\h\q\4\a\e\z\h\7\n\i\l\m\y\l\z\v\0\o\7\z\1\5\m\9\7\2\6\k\x\5\o\0\g\f\0\v\6\s\k\y\3\a\l\3\4\q\h\i\d\3\z\4\3\2\n\i\j\p\a\6\b\y\0\5\o\i\c\l\8\z\a\g\h\7\5\b\1\7\1\0\x\5\f\v\7\2\6\k\q\g\f\h\t\x\b\w\s\9\4\a\j\l\5\x\l\g\i\o\4\4\8\a\f\g\a\i\c\r\n\1\w\n\k\6\u\3\o\p\y\a ]] 00:29:05.726 00:29:05.726 real 0m12.607s 00:29:05.726 user 0m9.806s 00:29:05.726 sys 0m1.735s 00:29:05.726 22:37:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.726 22:37:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.726 22:37:04 -- dd/posix.sh@1 -- # cleanup 00:29:05.726 22:37:04 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:05.726 22:37:04 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:05.726 00:29:05.726 real 0m52.845s 00:29:05.726 user 0m39.738s 00:29:05.726 sys 0m7.028s 00:29:05.726 22:37:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.726 ************************************ 00:29:05.726 END TEST spdk_dd_posix 00:29:05.726 22:37:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.726 ************************************ 00:29:05.726 22:37:04 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:05.726 22:37:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.726 22:37:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.726 22:37:04 -- 
common/autotest_common.sh@10 -- # set +x 00:29:05.726 ************************************ 00:29:05.726 START TEST spdk_dd_malloc 00:29:05.726 ************************************ 00:29:05.726 22:37:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:05.984 * Looking for test storage... 00:29:05.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:05.984 22:37:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:05.984 22:37:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.984 22:37:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.984 22:37:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.984 22:37:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.984 22:37:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.985 22:37:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.985 22:37:04 -- paths/export.sh@5 -- # export PATH 00:29:05.985 22:37:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.985 22:37:04 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:29:05.985 22:37:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.985 22:37:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.985 22:37:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.985 ************************************ 00:29:05.985 START TEST dd_malloc_copy 00:29:05.985 ************************************ 00:29:05.985 22:37:04 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:29:05.985 22:37:04 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:29:05.985 22:37:04 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:29:05.985 22:37:04 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:29:05.985 22:37:04 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:29:05.985 22:37:04 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:29:05.985 22:37:04 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:29:05.985 22:37:04 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:29:05.985 22:37:04 -- dd/malloc.sh@28 -- # gen_conf 00:29:05.985 22:37:04 -- dd/common.sh@31 -- # xtrace_disable 00:29:05.985 22:37:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.985 { 00:29:05.985 "subsystems": [ 00:29:05.985 { 00:29:05.985 "subsystem": "bdev", 00:29:05.985 "config": [ 00:29:05.985 { 00:29:05.985 "params": { 00:29:05.985 "num_blocks": 1048576, 00:29:05.985 "block_size": 512, 00:29:05.985 "name": "malloc0" 00:29:05.985 }, 00:29:05.985 "method": "bdev_malloc_create" 00:29:05.985 }, 00:29:05.985 { 00:29:05.985 "params": { 00:29:05.985 "num_blocks": 1048576, 00:29:05.985 "block_size": 512, 00:29:05.985 "name": "malloc1" 00:29:05.985 }, 00:29:05.985 "method": "bdev_malloc_create" 00:29:05.985 }, 00:29:05.985 { 00:29:05.985 "method": "bdev_wait_for_examine" 00:29:05.985 } 00:29:05.985 ] 00:29:05.985 } 00:29:05.985 ] 00:29:05.985 } 00:29:05.985 [2024-07-14 22:37:04.918617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:05.985 [2024-07-14 22:37:04.918845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139248 ] 00:29:06.243 [2024-07-14 22:37:05.090360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.243 [2024-07-14 22:37:05.251073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.904  Copying: 227/512 [MB] (227 MBps) Copying: 456/512 [MB] (228 MBps) Copying: 512/512 [MB] (average 228 MBps) 00:29:12.904 00:29:12.904 22:37:11 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:29:12.904 22:37:11 -- dd/malloc.sh@33 -- # gen_conf 00:29:12.904 22:37:11 -- dd/common.sh@31 -- # xtrace_disable 00:29:12.904 22:37:11 -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 [2024-07-14 22:37:11.586070] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:12.904 [2024-07-14 22:37:11.586396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139336 ] 00:29:12.904 { 00:29:12.904 "subsystems": [ 00:29:12.904 { 00:29:12.904 "subsystem": "bdev", 00:29:12.904 "config": [ 00:29:12.904 { 00:29:12.904 "params": { 00:29:12.904 "num_blocks": 1048576, 00:29:12.904 "block_size": 512, 00:29:12.904 "name": "malloc0" 00:29:12.904 }, 00:29:12.904 "method": "bdev_malloc_create" 00:29:12.904 }, 00:29:12.904 { 00:29:12.904 "params": { 00:29:12.904 "num_blocks": 1048576, 00:29:12.904 "block_size": 512, 00:29:12.904 "name": "malloc1" 00:29:12.904 }, 00:29:12.904 "method": "bdev_malloc_create" 00:29:12.904 }, 00:29:12.904 { 00:29:12.904 "method": "bdev_wait_for_examine" 00:29:12.904 } 00:29:12.904 ] 00:29:12.904 } 00:29:12.904 ] 00:29:12.904 } 00:29:12.904 [2024-07-14 22:37:11.735547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.904 [2024-07-14 22:37:11.890329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.552  Copying: 227/512 [MB] (227 MBps) Copying: 454/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 227 MBps) 00:29:19.552 00:29:19.552 ************************************ 00:29:19.552 END TEST dd_malloc_copy 00:29:19.552 ************************************ 00:29:19.552 00:29:19.552 real 0m13.337s 00:29:19.552 user 0m12.047s 00:29:19.552 sys 0m1.158s 00:29:19.552 22:37:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.552 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.552 00:29:19.552 real 0m13.474s 00:29:19.552 user 0m12.121s 00:29:19.552 sys 0m1.223s 00:29:19.552 22:37:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.552 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.552 ************************************ 00:29:19.552 END TEST spdk_dd_malloc 00:29:19.552 ************************************ 00:29:19.552 22:37:18 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:19.552 22:37:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:19.552 22:37:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.552 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.552 ************************************ 00:29:19.552 START TEST spdk_dd_bdev_to_bdev 00:29:19.552 ************************************ 00:29:19.552 22:37:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:19.552 * Looking for test storage... 
00:29:19.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:19.552 22:37:18 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:19.552 22:37:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.552 22:37:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.552 22:37:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.552 22:37:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:19.552 22:37:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:19.552 22:37:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:19.552 22:37:18 -- paths/export.sh@5 -- # export PATH 00:29:19.552 22:37:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:29:19.552 22:37:18 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:29:19.552 22:37:18 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:29:19.552 [2024-07-14 22:37:18.427497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:19.552 [2024-07-14 22:37:18.427708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139494 ] 00:29:19.552 [2024-07-14 22:37:18.598396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.810 [2024-07-14 22:37:18.755792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.314  Copying: 256/256 [MB] (average 1514 MBps) 00:29:21.314 00:29:21.314 22:37:20 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.314 22:37:20 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:21.314 22:37:20 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:29:21.314 22:37:20 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:29:21.314 22:37:20 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:21.314 22:37:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:21.314 22:37:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.314 22:37:20 -- common/autotest_common.sh@10 -- # set +x 00:29:21.314 ************************************ 00:29:21.314 START TEST dd_inflate_file 00:29:21.314 ************************************ 00:29:21.314 22:37:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:21.314 [2024-07-14 22:37:20.186453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:21.314 [2024-07-14 22:37:20.186652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139523 ] 00:29:21.314 [2024-07-14 22:37:20.354479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.573 [2024-07-14 22:37:20.515266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.764  Copying: 64/64 [MB] (average 1523 MBps) 00:29:22.764 00:29:22.764 00:29:22.764 real 0m1.619s 00:29:22.764 user 0m1.211s 00:29:22.764 sys 0m0.276s 00:29:22.764 ************************************ 00:29:22.764 END TEST dd_inflate_file 00:29:22.764 ************************************ 00:29:22.764 22:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.764 22:37:21 -- common/autotest_common.sh@10 -- # set +x 00:29:22.764 22:37:21 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:29:22.764 22:37:21 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:29:22.764 22:37:21 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:22.764 22:37:21 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:29:22.764 22:37:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:22.764 22:37:21 -- dd/common.sh@31 -- # xtrace_disable 00:29:22.764 22:37:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.764 22:37:21 -- common/autotest_common.sh@10 -- # set +x 00:29:22.764 22:37:21 -- common/autotest_common.sh@10 -- # set +x 00:29:22.764 ************************************ 00:29:22.764 START TEST dd_copy_to_out_bdev 00:29:22.764 ************************************ 00:29:22.764 22:37:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:23.023 { 00:29:23.023 "subsystems": [ 00:29:23.023 { 00:29:23.023 "subsystem": "bdev", 00:29:23.023 "config": [ 00:29:23.023 { 00:29:23.023 "params": { 00:29:23.023 "block_size": 4096, 00:29:23.023 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:23.023 "name": "aio1" 00:29:23.023 }, 00:29:23.023 "method": "bdev_aio_create" 00:29:23.023 }, 00:29:23.023 { 00:29:23.023 "params": { 00:29:23.023 "trtype": "pcie", 00:29:23.023 "traddr": "0000:00:06.0", 00:29:23.023 "name": "Nvme0" 00:29:23.023 }, 00:29:23.023 "method": "bdev_nvme_attach_controller" 00:29:23.023 }, 00:29:23.023 { 00:29:23.023 "method": "bdev_wait_for_examine" 00:29:23.023 } 00:29:23.023 ] 00:29:23.023 } 00:29:23.023 ] 00:29:23.023 } 00:29:23.023 [2024-07-14 22:37:21.868775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:23.023 [2024-07-14 22:37:21.868988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139577 ] 00:29:23.023 [2024-07-14 22:37:22.036357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.281 [2024-07-14 22:37:22.198529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.597  Copying: 40/64 [MB] (40 MBps) Copying: 64/64 [MB] (average 40 MBps) 00:29:26.597 00:29:26.597 00:29:26.597 real 0m3.518s 00:29:26.597 user 0m3.078s 00:29:26.597 sys 0m0.316s 00:29:26.597 ************************************ 00:29:26.597 END TEST dd_copy_to_out_bdev 00:29:26.597 ************************************ 00:29:26.597 22:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.597 22:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:29:26.597 22:37:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:26.597 22:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.597 22:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:26.597 ************************************ 00:29:26.597 START TEST dd_offset_magic 00:29:26.597 ************************************ 00:29:26.597 22:37:25 -- common/autotest_common.sh@1104 -- # offset_magic 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:29:26.597 22:37:25 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:26.597 22:37:25 -- dd/common.sh@31 -- # xtrace_disable 00:29:26.597 22:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:26.597 [2024-07-14 22:37:25.442917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:26.597 [2024-07-14 22:37:25.443100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139658 ] 00:29:26.597 { 00:29:26.597 "subsystems": [ 00:29:26.597 { 00:29:26.597 "subsystem": "bdev", 00:29:26.597 "config": [ 00:29:26.597 { 00:29:26.597 "params": { 00:29:26.597 "block_size": 4096, 00:29:26.597 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:26.597 "name": "aio1" 00:29:26.597 }, 00:29:26.597 "method": "bdev_aio_create" 00:29:26.597 }, 00:29:26.597 { 00:29:26.597 "params": { 00:29:26.597 "trtype": "pcie", 00:29:26.597 "traddr": "0000:00:06.0", 00:29:26.597 "name": "Nvme0" 00:29:26.597 }, 00:29:26.597 "method": "bdev_nvme_attach_controller" 00:29:26.597 }, 00:29:26.597 { 00:29:26.597 "method": "bdev_wait_for_examine" 00:29:26.597 } 00:29:26.597 ] 00:29:26.597 } 00:29:26.597 ] 00:29:26.597 } 00:29:26.597 [2024-07-14 22:37:25.612113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.856 [2024-07-14 22:37:25.822887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.725  Copying: 65/65 [MB] (average 218 MBps) 00:29:28.725 00:29:28.725 22:37:27 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:29:28.725 22:37:27 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:28.725 22:37:27 -- dd/common.sh@31 -- # xtrace_disable 00:29:28.725 22:37:27 -- common/autotest_common.sh@10 -- # set +x 00:29:28.725 [2024-07-14 22:37:27.659315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:28.725 [2024-07-14 22:37:27.659505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139695 ] 00:29:28.725 { 00:29:28.725 "subsystems": [ 00:29:28.725 { 00:29:28.725 "subsystem": "bdev", 00:29:28.725 "config": [ 00:29:28.725 { 00:29:28.725 "params": { 00:29:28.725 "block_size": 4096, 00:29:28.725 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:28.725 "name": "aio1" 00:29:28.725 }, 00:29:28.725 "method": "bdev_aio_create" 00:29:28.725 }, 00:29:28.725 { 00:29:28.725 "params": { 00:29:28.725 "trtype": "pcie", 00:29:28.725 "traddr": "0000:00:06.0", 00:29:28.725 "name": "Nvme0" 00:29:28.725 }, 00:29:28.725 "method": "bdev_nvme_attach_controller" 00:29:28.725 }, 00:29:28.725 { 00:29:28.725 "method": "bdev_wait_for_examine" 00:29:28.725 } 00:29:28.725 ] 00:29:28.725 } 00:29:28.725 ] 00:29:28.725 } 00:29:28.983 [2024-07-14 22:37:27.827686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.983 [2024-07-14 22:37:27.990048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.485  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:30.485 00:29:30.485 22:37:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:30.485 22:37:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:30.485 22:37:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:30.485 22:37:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:29:30.485 22:37:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:30.485 22:37:29 -- dd/common.sh@31 -- # xtrace_disable 00:29:30.485 22:37:29 -- common/autotest_common.sh@10 -- # set +x 00:29:30.485 [2024-07-14 22:37:29.385486] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:30.485 [2024-07-14 22:37:29.385637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139725 ] 00:29:30.485 { 00:29:30.485 "subsystems": [ 00:29:30.485 { 00:29:30.485 "subsystem": "bdev", 00:29:30.485 "config": [ 00:29:30.485 { 00:29:30.485 "params": { 00:29:30.485 "block_size": 4096, 00:29:30.485 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:30.485 "name": "aio1" 00:29:30.485 }, 00:29:30.485 "method": "bdev_aio_create" 00:29:30.485 }, 00:29:30.485 { 00:29:30.485 "params": { 00:29:30.485 "trtype": "pcie", 00:29:30.485 "traddr": "0000:00:06.0", 00:29:30.485 "name": "Nvme0" 00:29:30.485 }, 00:29:30.485 "method": "bdev_nvme_attach_controller" 00:29:30.485 }, 00:29:30.485 { 00:29:30.485 "method": "bdev_wait_for_examine" 00:29:30.485 } 00:29:30.485 ] 00:29:30.485 } 00:29:30.485 ] 00:29:30.485 } 00:29:30.485 [2024-07-14 22:37:29.544212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.744 [2024-07-14 22:37:29.707013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.279  Copying: 65/65 [MB] (average 274 MBps) 00:29:32.279 00:29:32.279 22:37:31 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:29:32.279 22:37:31 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:32.279 22:37:31 -- dd/common.sh@31 -- # xtrace_disable 00:29:32.279 22:37:31 -- common/autotest_common.sh@10 -- # set +x 00:29:32.279 [2024-07-14 22:37:31.231093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:32.279 [2024-07-14 22:37:31.231278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139754 ] 00:29:32.279 { 00:29:32.279 "subsystems": [ 00:29:32.279 { 00:29:32.280 "subsystem": "bdev", 00:29:32.280 "config": [ 00:29:32.280 { 00:29:32.280 "params": { 00:29:32.280 "block_size": 4096, 00:29:32.280 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:32.280 "name": "aio1" 00:29:32.280 }, 00:29:32.280 "method": "bdev_aio_create" 00:29:32.280 }, 00:29:32.280 { 00:29:32.280 "params": { 00:29:32.280 "trtype": "pcie", 00:29:32.280 "traddr": "0000:00:06.0", 00:29:32.280 "name": "Nvme0" 00:29:32.280 }, 00:29:32.280 "method": "bdev_nvme_attach_controller" 00:29:32.280 }, 00:29:32.280 { 00:29:32.280 "method": "bdev_wait_for_examine" 00:29:32.280 } 00:29:32.280 ] 00:29:32.280 } 00:29:32.280 ] 00:29:32.280 } 00:29:32.539 [2024-07-14 22:37:31.389164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.539 [2024-07-14 22:37:31.546973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.040  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:34.040 00:29:34.040 22:37:32 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:34.040 22:37:32 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:34.040 00:29:34.040 real 0m7.522s 00:29:34.040 user 0m5.692s 00:29:34.040 sys 0m1.030s 00:29:34.040 22:37:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.040 22:37:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.040 ************************************ 00:29:34.040 END TEST dd_offset_magic 00:29:34.040 ************************************ 00:29:34.040 22:37:32 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:29:34.040 22:37:32 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:29:34.040 22:37:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:34.040 22:37:32 -- dd/common.sh@11 -- # local nvme_ref= 00:29:34.040 22:37:32 -- dd/common.sh@12 -- # local size=4194330 00:29:34.040 22:37:32 -- dd/common.sh@14 -- # local bs=1048576 00:29:34.040 22:37:32 -- dd/common.sh@15 -- # local count=5 00:29:34.040 22:37:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:29:34.040 22:37:32 -- dd/common.sh@18 -- # gen_conf 00:29:34.040 22:37:32 -- dd/common.sh@31 -- # xtrace_disable 00:29:34.040 22:37:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.040 [2024-07-14 22:37:33.009305] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:34.040 [2024-07-14 22:37:33.009508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139823 ] 00:29:34.040 { 00:29:34.040 "subsystems": [ 00:29:34.040 { 00:29:34.040 "subsystem": "bdev", 00:29:34.040 "config": [ 00:29:34.040 { 00:29:34.040 "params": { 00:29:34.040 "block_size": 4096, 00:29:34.040 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:34.040 "name": "aio1" 00:29:34.040 }, 00:29:34.040 "method": "bdev_aio_create" 00:29:34.040 }, 00:29:34.040 { 00:29:34.040 "params": { 00:29:34.040 "trtype": "pcie", 00:29:34.040 "traddr": "0000:00:06.0", 00:29:34.040 "name": "Nvme0" 00:29:34.040 }, 00:29:34.040 "method": "bdev_nvme_attach_controller" 00:29:34.040 }, 00:29:34.040 { 00:29:34.040 "method": "bdev_wait_for_examine" 00:29:34.040 } 00:29:34.040 ] 00:29:34.040 } 00:29:34.040 ] 00:29:34.040 } 00:29:34.298 [2024-07-14 22:37:33.180032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.298 [2024-07-14 22:37:33.347524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.798  Copying: 5120/5120 [kB] (average 1250 MBps) 00:29:35.798 00:29:35.798 22:37:34 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:29:35.798 22:37:34 -- dd/common.sh@10 -- # local bdev=aio1 00:29:35.798 22:37:34 -- dd/common.sh@11 -- # local nvme_ref= 00:29:35.798 22:37:34 -- dd/common.sh@12 -- # local size=4194330 00:29:35.798 22:37:34 -- dd/common.sh@14 -- # local bs=1048576 00:29:35.798 22:37:34 -- dd/common.sh@15 -- # local count=5 00:29:35.798 22:37:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:29:35.798 22:37:34 -- dd/common.sh@18 -- # gen_conf 00:29:35.798 22:37:34 -- dd/common.sh@31 -- # xtrace_disable 00:29:35.798 22:37:34 -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 [2024-07-14 22:37:34.626395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:35.798 [2024-07-14 22:37:34.626552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139845 ] 00:29:35.798 { 00:29:35.798 "subsystems": [ 00:29:35.798 { 00:29:35.798 "subsystem": "bdev", 00:29:35.798 "config": [ 00:29:35.798 { 00:29:35.798 "params": { 00:29:35.798 "block_size": 4096, 00:29:35.798 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:35.798 "name": "aio1" 00:29:35.798 }, 00:29:35.798 "method": "bdev_aio_create" 00:29:35.798 }, 00:29:35.798 { 00:29:35.798 "params": { 00:29:35.798 "trtype": "pcie", 00:29:35.798 "traddr": "0000:00:06.0", 00:29:35.798 "name": "Nvme0" 00:29:35.798 }, 00:29:35.798 "method": "bdev_nvme_attach_controller" 00:29:35.798 }, 00:29:35.798 { 00:29:35.798 "method": "bdev_wait_for_examine" 00:29:35.798 } 00:29:35.798 ] 00:29:35.798 } 00:29:35.798 ] 00:29:35.798 } 00:29:35.798 [2024-07-14 22:37:34.780575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.056 [2024-07-14 22:37:34.942545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.246  Copying: 5120/5120 [kB] (average 277 MBps) 00:29:37.246 00:29:37.504 22:37:36 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:37.504 00:29:37.504 real 0m18.095s 00:29:37.504 user 0m13.964s 00:29:37.504 sys 0m2.676s 00:29:37.504 ************************************ 00:29:37.504 END TEST spdk_dd_bdev_to_bdev 00:29:37.504 ************************************ 00:29:37.504 22:37:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.504 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:29:37.504 22:37:36 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:37.504 22:37:36 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:37.504 22:37:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:37.504 22:37:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:37.504 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:29:37.504 ************************************ 00:29:37.504 START TEST spdk_dd_sparse 00:29:37.504 ************************************ 00:29:37.504 22:37:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:37.504 * Looking for test storage... 
00:29:37.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:37.504 22:37:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:37.504 22:37:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.504 22:37:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.504 22:37:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.504 22:37:36 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:37.504 22:37:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:37.504 22:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:37.504 22:37:36 -- paths/export.sh@5 -- # export PATH 00:29:37.504 22:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:37.504 22:37:36 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:37.504 22:37:36 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:37.504 22:37:36 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:37.504 22:37:36 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:37.504 22:37:36 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:37.504 22:37:36 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:37.504 22:37:36 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:37.504 22:37:36 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:37.504 22:37:36 -- dd/sparse.sh@118 -- # prepare 00:29:37.504 22:37:36 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:37.504 22:37:36 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:37.504 1+0 records in 00:29:37.504 1+0 records 
out 00:29:37.504 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00682543 s, 615 MB/s 00:29:37.504 22:37:36 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:37.504 1+0 records in 00:29:37.504 1+0 records out 00:29:37.504 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103636 s, 405 MB/s 00:29:37.504 22:37:36 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:37.504 1+0 records in 00:29:37.504 1+0 records out 00:29:37.504 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0057907 s, 724 MB/s 00:29:37.504 22:37:36 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:37.504 22:37:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:37.504 22:37:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:37.504 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:29:37.504 ************************************ 00:29:37.504 START TEST dd_sparse_file_to_file 00:29:37.504 ************************************ 00:29:37.504 22:37:36 -- common/autotest_common.sh@1104 -- # file_to_file 00:29:37.504 22:37:36 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:37.504 22:37:36 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:37.504 22:37:36 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:37.504 22:37:36 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:37.504 22:37:36 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:29:37.504 22:37:36 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:37.504 22:37:36 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:37.504 22:37:36 -- dd/sparse.sh@41 -- # gen_conf 00:29:37.504 22:37:36 -- dd/common.sh@31 -- # xtrace_disable 00:29:37.504 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:29:37.762 [2024-07-14 22:37:36.612531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:37.762 [2024-07-14 22:37:36.612711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139934 ] 00:29:37.762 { 00:29:37.762 "subsystems": [ 00:29:37.762 { 00:29:37.762 "subsystem": "bdev", 00:29:37.762 "config": [ 00:29:37.762 { 00:29:37.762 "params": { 00:29:37.762 "block_size": 4096, 00:29:37.762 "filename": "dd_sparse_aio_disk", 00:29:37.762 "name": "dd_aio" 00:29:37.762 }, 00:29:37.762 "method": "bdev_aio_create" 00:29:37.762 }, 00:29:37.762 { 00:29:37.762 "params": { 00:29:37.762 "lvs_name": "dd_lvstore", 00:29:37.762 "bdev_name": "dd_aio" 00:29:37.762 }, 00:29:37.762 "method": "bdev_lvol_create_lvstore" 00:29:37.762 }, 00:29:37.762 { 00:29:37.762 "method": "bdev_wait_for_examine" 00:29:37.762 } 00:29:37.762 ] 00:29:37.762 } 00:29:37.762 ] 00:29:37.762 } 00:29:37.762 [2024-07-14 22:37:36.758071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.021 [2024-07-14 22:37:36.919089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.214  Copying: 12/36 [MB] (average 1200 MBps) 00:29:39.214 00:29:39.214 22:37:38 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:39.214 22:37:38 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:39.214 22:37:38 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:39.473 22:37:38 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:39.473 22:37:38 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:39.473 22:37:38 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:39.473 22:37:38 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:39.473 22:37:38 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:39.473 22:37:38 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:39.473 22:37:38 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:39.473 00:29:39.473 real 0m1.739s 00:29:39.473 user 0m1.382s 00:29:39.473 sys 0m0.243s 00:29:39.473 22:37:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.473 22:37:38 -- common/autotest_common.sh@10 -- # set +x 00:29:39.473 ************************************ 00:29:39.473 END TEST dd_sparse_file_to_file 00:29:39.473 ************************************ 00:29:39.473 22:37:38 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:39.473 22:37:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:39.473 22:37:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:39.473 22:37:38 -- common/autotest_common.sh@10 -- # set +x 00:29:39.473 ************************************ 00:29:39.473 START TEST dd_sparse_file_to_bdev 00:29:39.473 ************************************ 00:29:39.473 22:37:38 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:29:39.473 22:37:38 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:39.473 22:37:38 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:39.473 22:37:38 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:29:39.473 22:37:38 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:39.473 22:37:38 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:39.473 22:37:38 -- dd/sparse.sh@73 -- # gen_conf 00:29:39.473 22:37:38 -- 
dd/common.sh@31 -- # xtrace_disable 00:29:39.473 22:37:38 -- common/autotest_common.sh@10 -- # set +x 00:29:39.473 [2024-07-14 22:37:38.418033] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:39.473 [2024-07-14 22:37:38.418207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139993 ] 00:29:39.473 { 00:29:39.473 "subsystems": [ 00:29:39.473 { 00:29:39.473 "subsystem": "bdev", 00:29:39.473 "config": [ 00:29:39.473 { 00:29:39.473 "params": { 00:29:39.473 "block_size": 4096, 00:29:39.473 "filename": "dd_sparse_aio_disk", 00:29:39.473 "name": "dd_aio" 00:29:39.473 }, 00:29:39.473 "method": "bdev_aio_create" 00:29:39.473 }, 00:29:39.473 { 00:29:39.473 "params": { 00:29:39.473 "lvs_name": "dd_lvstore", 00:29:39.473 "thin_provision": true, 00:29:39.473 "lvol_name": "dd_lvol", 00:29:39.473 "size": 37748736 00:29:39.473 }, 00:29:39.473 "method": "bdev_lvol_create" 00:29:39.473 }, 00:29:39.473 { 00:29:39.473 "method": "bdev_wait_for_examine" 00:29:39.473 } 00:29:39.473 ] 00:29:39.473 } 00:29:39.473 ] 00:29:39.473 } 00:29:39.732 [2024-07-14 22:37:38.584096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.732 [2024-07-14 22:37:38.744872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.991 [2024-07-14 22:37:39.006040] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:39.991  Copying: 12/36 [MB] (average 545 MBps)[2024-07-14 22:37:39.061264] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:41.366 00:29:41.366 00:29:41.366 00:29:41.366 real 0m1.760s 00:29:41.366 user 0m1.407s 00:29:41.366 sys 0m0.255s 00:29:41.366 ************************************ 00:29:41.366 22:37:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.366 22:37:40 -- common/autotest_common.sh@10 -- # set +x 00:29:41.366 END TEST dd_sparse_file_to_bdev 00:29:41.366 ************************************ 00:29:41.366 22:37:40 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:41.366 22:37:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:41.366 22:37:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:41.366 22:37:40 -- common/autotest_common.sh@10 -- # set +x 00:29:41.366 ************************************ 00:29:41.366 START TEST dd_sparse_bdev_to_file 00:29:41.366 ************************************ 00:29:41.366 22:37:40 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:29:41.366 22:37:40 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:41.366 22:37:40 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:41.366 22:37:40 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:41.366 22:37:40 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:41.366 22:37:40 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:41.366 22:37:40 -- dd/sparse.sh@91 -- # gen_conf 00:29:41.366 22:37:40 -- dd/common.sh@31 -- # xtrace_disable 00:29:41.366 22:37:40 -- common/autotest_common.sh@10 -- # set +x 
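(The sparse suite's pass/fail criterion reduces to comparing a file's apparent size against its allocated blocks, using the same truncate/dd/stat calls seen in the prepare step above. A minimal standalone sketch of that check follows; the filename is illustrative, not from the suite.)

# A hole-punched file keeps its apparent size (stat %s) but allocates
# fewer 512-byte blocks (stat %b) than a fully written one would.
truncate --size 104857600 sparse_disk                              # 100 MiB apparent, nothing allocated
dd if=/dev/zero of=sparse_disk bs=4M count=1 seek=4 conv=notrunc   # 4 MiB of real data at offset 16 MiB
apparent=$(stat --printf=%s sparse_disk)                           # 104857600 either way
blocks=$(stat --printf=%b sparse_disk)                             # ~8192, far below apparent/512 = 204800
[[ $blocks -lt $((apparent / 512)) ]] && echo 'sparse_disk is sparse'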
00:29:41.366 [2024-07-14 22:37:40.234111] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:41.366 [2024-07-14 22:37:40.234326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140045 ] 00:29:41.366 { 00:29:41.366 "subsystems": [ 00:29:41.366 { 00:29:41.366 "subsystem": "bdev", 00:29:41.366 "config": [ 00:29:41.366 { 00:29:41.366 "params": { 00:29:41.366 "block_size": 4096, 00:29:41.366 "filename": "dd_sparse_aio_disk", 00:29:41.366 "name": "dd_aio" 00:29:41.366 }, 00:29:41.366 "method": "bdev_aio_create" 00:29:41.366 }, 00:29:41.366 { 00:29:41.366 "method": "bdev_wait_for_examine" 00:29:41.366 } 00:29:41.366 ] 00:29:41.366 } 00:29:41.366 ] 00:29:41.366 } 00:29:41.366 [2024-07-14 22:37:40.404270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.625 [2024-07-14 22:37:40.627293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.259  Copying: 12/36 [MB] (average 1090 MBps) 00:29:43.259 00:29:43.259 22:37:42 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:43.259 22:37:42 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:43.259 22:37:42 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:43.259 22:37:42 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:43.259 22:37:42 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:43.259 22:37:42 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:43.259 22:37:42 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:43.259 22:37:42 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:43.259 22:37:42 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:43.259 22:37:42 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:43.259 00:29:43.259 real 0m1.875s 00:29:43.259 user 0m1.492s 00:29:43.259 sys 0m0.291s 00:29:43.259 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.259 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.259 ************************************ 00:29:43.259 END TEST dd_sparse_bdev_to_file 00:29:43.259 ************************************ 00:29:43.259 22:37:42 -- dd/sparse.sh@1 -- # cleanup 00:29:43.259 22:37:42 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:43.259 22:37:42 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:43.259 22:37:42 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:43.259 22:37:42 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:43.259 00:29:43.259 real 0m5.676s 00:29:43.259 user 0m4.436s 00:29:43.259 sys 0m0.923s 00:29:43.259 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.259 ************************************ 00:29:43.259 END TEST spdk_dd_sparse 00:29:43.259 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.259 ************************************ 00:29:43.259 22:37:42 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:43.259 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.259 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.259 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.259 ************************************ 00:29:43.259 START TEST spdk_dd_negative 00:29:43.259 ************************************ 00:29:43.259 22:37:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:43.259 * Looking for test storage... 
00:29:43.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:43.259 22:37:42 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:43.259 22:37:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.259 22:37:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.259 22:37:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.259 22:37:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:43.259 22:37:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:43.259 22:37:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:43.259 22:37:42 -- paths/export.sh@5 -- # export PATH 00:29:43.259 22:37:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:43.259 22:37:42 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:43.259 22:37:42 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:43.259 22:37:42 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:43.259 22:37:42 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:43.259 22:37:42 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:43.259 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.259 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.259 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.259 ************************************ 00:29:43.259 
START TEST dd_invalid_arguments 00:29:43.259 ************************************ 00:29:43.259 22:37:42 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:43.259 22:37:42 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:43.259 22:37:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:43.259 22:37:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:43.259 22:37:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.259 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.259 22:37:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.259 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.259 22:37:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.259 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.259 22:37:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.259 22:37:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:43.259 22:37:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:43.259 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:43.259 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:43.259 options: 00:29:43.259 -c, --config JSON config file (default none) 00:29:43.259 --json JSON config file (default none) 00:29:43.259 --json-ignore-init-errors 00:29:43.259 don't exit on invalid config entry 00:29:43.259 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:43.259 -g, --single-file-segments 00:29:43.259 force creating just one hugetlbfs file 00:29:43.259 -h, --help show this usage 00:29:43.259 -i, --shm-id shared memory ID (optional) 00:29:43.259 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:43.259 --lcores lcore to CPU mapping list. The list is in the format: 00:29:43.259 [<,lcores[@CPUs]>...] 00:29:43.259 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:43.259 Within the group, '-' is used for range separator, 00:29:43.259 ',' is used for single number separator. 00:29:43.259 '( )' can be omitted for single element group, 00:29:43.259 '@' can be omitted if cpus and lcores have the same value 00:29:43.259 -n, --mem-channels channel number of memory channels used for DPDK 00:29:43.259 -p, --main-core main (primary) core for DPDK 00:29:43.259 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:43.259 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:43.259 --disable-cpumask-locks Disable CPU core lock files. 
00:29:43.259 --silence-noticelog disable notice level logging to stderr 00:29:43.259 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:43.259 -u, --no-pci disable PCI access 00:29:43.259 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:43.259 --max-delay maximum reactor delay (in microseconds) 00:29:43.259 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:43.259 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:43.259 -R, --huge-unlink unlink huge files after initialization 00:29:43.259 -v, --version print SPDK version 00:29:43.259 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:43.259 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:43.259 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:43.259 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:43.259 Tracepoints vary in size and can use more than one trace entry. 00:29:43.259 --rpcs-allowed comma-separated list of permitted RPCS 00:29:43.259 --env-context Opaque context for use of the env implementation 00:29:43.259 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:43.259 --no-huge run without using hugepages 00:29:43.260 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:43.260 -e, --tpoint-group [:] 00:29:43.260 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:43.260 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:43.260 Groups and [2024-07-14 22:37:42.328450] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:43.518 masks can be combined (e.g. thread,bdev:0x1). 00:29:43.518 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:43.518 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:43.518 [--------- DD Options ---------] 00:29:43.518 --if Input file. Must specify either --if or --ib. 00:29:43.518 --ib Input bdev. Must specifier either --if or --ib 00:29:43.518 --of Output file. Must specify either --of or --ob. 00:29:43.518 --ob Output bdev. Must specify either --of or --ob. 00:29:43.518 --iflag Input file flags. 00:29:43.518 --oflag Output file flags. 00:29:43.518 --bs I/O unit size (default: 4096) 00:29:43.518 --qd Queue depth (default: 2) 00:29:43.518 --count I/O unit count. The number of I/O units to copy. (default: all) 00:29:43.518 --skip Skip this many I/O units at start of input. 
(default: 0) 00:29:43.518 --seek Skip this many I/O units at start of output. (default: 0) 00:29:43.518 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:43.518 --sparse Enable hole skipping in input target 00:29:43.518 Available iflag and oflag values: 00:29:43.518 append - append mode 00:29:43.518 direct - use direct I/O for data 00:29:43.518 directory - fail unless a directory 00:29:43.518 dsync - use synchronized I/O for data 00:29:43.518 noatime - do not update access time 00:29:43.518 noctty - do not assign controlling terminal from file 00:29:43.518 nofollow - do not follow symlinks 00:29:43.518 nonblock - use non-blocking I/O 00:29:43.518 sync - use synchronized I/O for data and metadata 00:29:43.518 22:37:42 -- common/autotest_common.sh@643 -- # es=2 00:29:43.518 22:37:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:43.518 22:37:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:43.518 22:37:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:43.518 00:29:43.518 real 0m0.132s 00:29:43.518 user 0m0.083s 00:29:43.518 sys 0m0.050s 00:29:43.518 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.518 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.518 ************************************ 00:29:43.518 END TEST dd_invalid_arguments 00:29:43.518 ************************************ 00:29:43.518 22:37:42 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:43.518 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.518 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.518 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.518 ************************************ 00:29:43.518 START TEST dd_double_input 00:29:43.518 ************************************ 00:29:43.518 22:37:42 -- common/autotest_common.sh@1104 -- # double_input 00:29:43.518 22:37:42 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:43.518 22:37:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:43.518 22:37:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:43.518 22:37:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.518 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.518 22:37:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.518 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.518 22:37:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.518 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.518 22:37:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.518 22:37:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:43.518 22:37:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:43.518 [2024-07-14 22:37:42.513234] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
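The usage text above also documents the invocation these negative tests are inverting; a minimal valid copy, sketched with placeholder paths that are not part of this run:

    # Exactly one input (--if or --ib) and one output (--of or --ob);
    # copy 256 I/O units of 4096 bytes from one file to another.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096 --count=256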
00:29:43.518 22:37:42 -- common/autotest_common.sh@643 -- # es=22 00:29:43.518 22:37:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:43.518 22:37:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:43.518 22:37:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:43.518 00:29:43.518 real 0m0.134s 00:29:43.518 user 0m0.049s 00:29:43.518 sys 0m0.086s 00:29:43.518 ************************************ 00:29:43.518 END TEST dd_double_input 00:29:43.518 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.518 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.518 ************************************ 00:29:43.777 22:37:42 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:43.777 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.777 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.777 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.777 ************************************ 00:29:43.777 START TEST dd_double_output 00:29:43.777 ************************************ 00:29:43.777 22:37:42 -- common/autotest_common.sh@1104 -- # double_output 00:29:43.777 22:37:42 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:43.777 22:37:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:43.777 22:37:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:43.777 22:37:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:43.777 22:37:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:43.777 [2024-07-14 22:37:42.684282] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
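Every negative case here runs spdk_dd through the NOT helper whose xtrace fills these logs; a simplified sketch of that wrapper (the real one in common/autotest_common.sh additionally validates the argument via valid_exec_arg and records the exit status in es):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0
    }
    # Placeholder bdev names; --if and --ib are mutually exclusive, so this must fail.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --ib=Malloc0 --ob=Malloc1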
00:29:43.777 22:37:42 -- common/autotest_common.sh@643 -- # es=22 00:29:43.777 22:37:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:43.777 22:37:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:43.777 22:37:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:43.777 00:29:43.777 real 0m0.111s 00:29:43.777 user 0m0.050s 00:29:43.777 sys 0m0.061s 00:29:43.777 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.777 ************************************ 00:29:43.777 END TEST dd_double_output 00:29:43.777 ************************************ 00:29:43.777 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.777 22:37:42 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:43.777 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.777 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.777 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:43.777 ************************************ 00:29:43.777 START TEST dd_no_input 00:29:43.777 ************************************ 00:29:43.777 22:37:42 -- common/autotest_common.sh@1104 -- # no_input 00:29:43.777 22:37:42 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:43.777 22:37:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:43.777 22:37:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:43.777 22:37:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.777 22:37:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:43.777 22:37:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:43.777 [2024-07-14 22:37:42.836110] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:44.036 22:37:42 -- common/autotest_common.sh@643 -- # es=22 00:29:44.036 22:37:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.036 22:37:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.036 22:37:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.036 ************************************ 00:29:44.036 END TEST dd_no_input 00:29:44.036 00:29:44.036 real 0m0.093s 00:29:44.036 user 0m0.058s 00:29:44.036 sys 0m0.036s 00:29:44.036 22:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.036 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:44.036 ************************************ 00:29:44.036 22:37:42 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:44.036 22:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:44.036 22:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.036 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:29:44.036 ************************************ 
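The es values compared in these traces are ordinary shell exit codes: 22 matches the numeric value of EINVAL, and anything above 128 would indicate death by signal. A simplified sketch of that classification (the real wrapper applies further case mapping, e.g. folding large codes down to 1):

    # Hypothetical standalone check, not part of the harness.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= ; es=$?
    if (( es > 128 )); then
        echo "terminated by signal $((es - 128))"
    elif (( es == 22 )); then
        echo "rejected with EINVAL, as these tests expect"
    fi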
00:29:44.036 START TEST dd_no_output 00:29:44.036 ************************************ 00:29:44.036 22:37:42 -- common/autotest_common.sh@1104 -- # no_output 00:29:44.036 22:37:42 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:44.036 22:37:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.036 22:37:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:44.036 22:37:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.036 22:37:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.036 22:37:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.036 22:37:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:44.036 22:37:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:44.036 [2024-07-14 22:37:42.994766] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:44.036 22:37:43 -- common/autotest_common.sh@643 -- # es=22 00:29:44.036 22:37:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.036 22:37:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.036 22:37:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.036 00:29:44.036 real 0m0.116s 00:29:44.036 user 0m0.044s 00:29:44.036 sys 0m0.063s 00:29:44.036 22:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.036 ************************************ 00:29:44.036 END TEST dd_no_output 00:29:44.036 ************************************ 00:29:44.036 22:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:44.036 22:37:43 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:44.036 22:37:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:44.036 22:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.036 22:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:44.036 ************************************ 00:29:44.036 START TEST dd_wrong_blocksize 00:29:44.036 ************************************ 00:29:44.036 22:37:43 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:44.036 22:37:43 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:44.036 22:37:43 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.036 22:37:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:44.036 22:37:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:43 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:44.036 22:37:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.036 22:37:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.036 22:37:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.036 22:37:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:44.036 22:37:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:44.294 [2024-07-14 22:37:43.152869] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:44.295 22:37:43 -- common/autotest_common.sh@643 -- # es=22 00:29:44.295 22:37:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.295 22:37:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.295 22:37:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.295 00:29:44.295 real 0m0.107s 00:29:44.295 user 0m0.050s 00:29:44.295 sys 0m0.057s 00:29:44.295 22:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.295 22:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 ************************************ 00:29:44.295 END TEST dd_wrong_blocksize 00:29:44.295 ************************************ 00:29:44.295 22:37:43 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:44.295 22:37:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:44.295 22:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.295 22:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 ************************************ 00:29:44.295 START TEST dd_smaller_blocksize 00:29:44.295 ************************************ 00:29:44.295 22:37:43 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:44.295 22:37:43 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:44.295 22:37:43 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.295 22:37:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:44.295 22:37:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.295 22:37:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.295 22:37:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.295 22:37:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.295 22:37:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.295 22:37:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.295 22:37:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.295 22:37:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:44.295 22:37:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:44.295 [2024-07-14 22:37:43.332899] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:44.295 [2024-07-14 22:37:43.333168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140330 ] 00:29:44.553 [2024-07-14 22:37:43.504244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.811 [2024-07-14 22:37:43.745699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.377 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:45.377 [2024-07-14 22:37:44.320249] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:45.377 [2024-07-14 22:37:44.320381] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:45.943 [2024-07-14 22:37:44.960943] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:46.513 22:37:45 -- common/autotest_common.sh@643 -- # es=244 00:29:46.513 22:37:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:46.513 22:37:45 -- common/autotest_common.sh@652 -- # es=116 00:29:46.513 ************************************ 00:29:46.513 END TEST dd_smaller_blocksize 00:29:46.513 ************************************ 00:29:46.514 22:37:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@660 -- # es=1 00:29:46.514 22:37:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:46.514 00:29:46.514 real 0m2.069s 00:29:46.514 user 0m1.455s 00:29:46.514 sys 0m0.513s 00:29:46.514 22:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.514 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.514 22:37:45 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:46.514 22:37:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:46.514 22:37:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.514 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.514 ************************************ 00:29:46.514 START TEST dd_invalid_count 00:29:46.514 ************************************ 00:29:46.514 22:37:45 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:46.514 22:37:45 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:46.514 22:37:45 -- common/autotest_common.sh@640 -- # local es=0 00:29:46.514 22:37:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:46.514 22:37:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:46.514 22:37:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:46.514 [2024-07-14 22:37:45.439099] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:46.514 22:37:45 -- common/autotest_common.sh@643 -- # es=22 00:29:46.514 22:37:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:46.514 ************************************ 00:29:46.514 END TEST dd_invalid_count 00:29:46.514 22:37:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:46.514 22:37:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:46.514 00:29:46.514 real 0m0.101s 00:29:46.514 user 0m0.066s 00:29:46.514 sys 0m0.035s 00:29:46.514 22:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.514 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.514 ************************************ 00:29:46.514 22:37:45 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:46.514 22:37:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:46.514 22:37:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.514 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.514 ************************************ 00:29:46.514 START TEST dd_invalid_oflag 00:29:46.514 ************************************ 00:29:46.514 22:37:45 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:46.514 22:37:45 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:46.514 22:37:45 -- common/autotest_common.sh@640 -- # local es=0 00:29:46.514 22:37:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:46.514 22:37:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.514 22:37:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.514 22:37:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:46.514 22:37:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:46.778 [2024-07-14 22:37:45.586636] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:46.778 22:37:45 -- common/autotest_common.sh@643 -- # es=22 00:29:46.778 22:37:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:46.778 22:37:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:46.778 
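The oflag/iflag cases enforce pairing: --oflag values qualify only a file output (--of) and --iflag values only a file input (--if). Drawing on the flag list printed earlier, a valid pairing might look like this, with placeholder paths:

    # direct and nonblock qualify the file input, dsync the file output.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --iflag=direct,nonblock \
        --of=/tmp/out.bin --oflag=dsync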
22:37:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:46.778 00:29:46.778 real 0m0.095s 00:29:46.778 user 0m0.049s 00:29:46.778 sys 0m0.047s 00:29:46.778 ************************************ 00:29:46.778 END TEST dd_invalid_oflag 00:29:46.778 ************************************ 00:29:46.778 22:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.778 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.778 22:37:45 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:46.778 22:37:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:46.778 22:37:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.778 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.778 ************************************ 00:29:46.778 START TEST dd_invalid_iflag 00:29:46.778 ************************************ 00:29:46.778 22:37:45 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:46.778 22:37:45 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:46.778 22:37:45 -- common/autotest_common.sh@640 -- # local es=0 00:29:46.778 22:37:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:46.778 22:37:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:46.778 22:37:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:46.778 [2024-07-14 22:37:45.745128] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:46.778 22:37:45 -- common/autotest_common.sh@643 -- # es=22 00:29:46.778 22:37:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:46.778 22:37:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:46.778 22:37:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:46.778 ************************************ 00:29:46.778 END TEST dd_invalid_iflag 00:29:46.778 ************************************ 00:29:46.778 00:29:46.778 real 0m0.106s 00:29:46.778 user 0m0.051s 00:29:46.778 sys 0m0.055s 00:29:46.778 22:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.778 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.778 22:37:45 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:46.778 22:37:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:46.778 22:37:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.778 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:46.778 ************************************ 00:29:46.778 START TEST dd_unknown_flag 00:29:46.778 ************************************ 00:29:46.778 22:37:45 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:46.778 22:37:45 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:46.778 22:37:45 -- common/autotest_common.sh@640 -- # local es=0 00:29:46.778 22:37:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:46.778 22:37:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.778 22:37:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:46.778 22:37:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:47.037 [2024-07-14 22:37:45.901501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:47.037 [2024-07-14 22:37:45.901686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140455 ] 00:29:47.037 [2024-07-14 22:37:46.057102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.296 [2024-07-14 22:37:46.239102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.555 [2024-07-14 22:37:46.509242] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:47.555 [2024-07-14 22:37:46.509366] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:47.555 [2024-07-14 22:37:46.509392] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:47.555 [2024-07-14 22:37:46.509438] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:48.122 [2024-07-14 22:37:47.146591] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:48.690 22:37:47 -- common/autotest_common.sh@643 -- # es=234 00:29:48.690 22:37:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:48.690 22:37:47 -- common/autotest_common.sh@652 -- # es=106 00:29:48.690 22:37:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:48.690 22:37:47 -- common/autotest_common.sh@660 -- # es=1 00:29:48.690 22:37:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:48.690 00:29:48.690 real 0m1.659s 00:29:48.690 user 0m1.347s 00:29:48.690 sys 0m0.211s 00:29:48.690 ************************************ 00:29:48.690 END TEST dd_unknown_flag 00:29:48.690 ************************************ 00:29:48.690 22:37:47 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:29:48.690 22:37:47 -- common/autotest_common.sh@10 -- # set +x 00:29:48.690 22:37:47 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:48.690 22:37:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:48.690 22:37:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.690 22:37:47 -- common/autotest_common.sh@10 -- # set +x 00:29:48.690 ************************************ 00:29:48.690 START TEST dd_invalid_json 00:29:48.690 ************************************ 00:29:48.690 22:37:47 -- common/autotest_common.sh@1104 -- # invalid_json 00:29:48.690 22:37:47 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:48.690 22:37:47 -- dd/negative_dd.sh@95 -- # : 00:29:48.690 22:37:47 -- common/autotest_common.sh@640 -- # local es=0 00:29:48.690 22:37:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:48.690 22:37:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.690 22:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:48.690 22:37:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.690 22:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:48.690 22:37:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.690 22:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:48.690 22:37:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.690 22:37:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:48.690 22:37:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:48.690 [2024-07-14 22:37:47.639119] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:48.691 [2024-07-14 22:37:47.639351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140502 ] 00:29:48.950 [2024-07-14 22:37:47.809239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.950 [2024-07-14 22:37:47.996968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.950 [2024-07-14 22:37:47.997181] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:48.950 [2024-07-14 22:37:47.997282] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:48.950 [2024-07-14 22:37:47.997347] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:49.518 22:37:48 -- common/autotest_common.sh@643 -- # es=234 00:29:49.518 22:37:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:49.518 22:37:48 -- common/autotest_common.sh@652 -- # es=106 00:29:49.518 22:37:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:49.518 22:37:48 -- common/autotest_common.sh@660 -- # es=1 00:29:49.518 22:37:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:49.518 00:29:49.518 real 0m0.790s 00:29:49.518 user 0m0.576s 00:29:49.518 sys 0m0.115s 00:29:49.518 ************************************ 00:29:49.518 END TEST dd_invalid_json 00:29:49.518 ************************************ 00:29:49.518 22:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.518 22:37:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.518 00:29:49.518 real 0m6.241s 00:29:49.518 user 0m4.228s 00:29:49.518 sys 0m1.651s 00:29:49.518 22:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.518 22:37:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.518 ************************************ 00:29:49.518 END TEST spdk_dd_negative 00:29:49.518 ************************************ 00:29:49.518 00:29:49.518 real 2m19.792s 00:29:49.518 user 1m49.459s 00:29:49.518 sys 0m20.429s 00:29:49.518 22:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.518 22:37:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.518 ************************************ 00:29:49.518 END TEST spdk_dd 00:29:49.518 ************************************ 00:29:49.518 22:37:48 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:29:49.518 22:37:48 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:49.518 22:37:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:49.518 22:37:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:49.518 22:37:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.518 ************************************ 00:29:49.518 START TEST blockdev_nvme 00:29:49.518 ************************************ 00:29:49.518 22:37:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:49.518 * Looking for test storage... 
00:29:49.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:49.518 22:37:48 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:49.518 22:37:48 -- bdev/nbd_common.sh@6 -- # set -e 00:29:49.519 22:37:48 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:49.519 22:37:48 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:49.519 22:37:48 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:49.519 22:37:48 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:49.519 22:37:48 -- bdev/blockdev.sh@18 -- # : 00:29:49.519 22:37:48 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:49.519 22:37:48 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:49.519 22:37:48 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:49.519 22:37:48 -- bdev/blockdev.sh@672 -- # uname -s 00:29:49.519 22:37:48 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:49.519 22:37:48 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:49.519 22:37:48 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:49.519 22:37:48 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:49.519 22:37:48 -- bdev/blockdev.sh@682 -- # dek= 00:29:49.519 22:37:48 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:49.519 22:37:48 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:49.519 22:37:48 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:49.519 22:37:48 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:49.519 22:37:48 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:49.519 22:37:48 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:49.519 22:37:48 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140598 00:29:49.519 22:37:48 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:49.519 22:37:48 -- bdev/blockdev.sh@47 -- # waitforlisten 140598 00:29:49.519 22:37:48 -- common/autotest_common.sh@819 -- # '[' -z 140598 ']' 00:29:49.519 22:37:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.519 22:37:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:49.519 22:37:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.519 22:37:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:49.519 22:37:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.519 22:37:48 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:49.777 [2024-07-14 22:37:48.650171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:49.777 [2024-07-14 22:37:48.650412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140598 ] 00:29:49.777 [2024-07-14 22:37:48.817383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.035 [2024-07-14 22:37:49.005527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:50.035 [2024-07-14 22:37:49.005785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.410 22:37:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:51.410 22:37:50 -- common/autotest_common.sh@852 -- # return 0 00:29:51.410 22:37:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:51.410 22:37:50 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:51.410 22:37:50 -- bdev/blockdev.sh@79 -- # local json 00:29:51.410 22:37:50 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:51.410 22:37:50 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:51.410 22:37:50 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.410 22:37:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.410 22:37:50 -- bdev/blockdev.sh@738 -- # cat 00:29:51.410 22:37:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.410 22:37:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.410 22:37:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.410 22:37:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:51.410 22:37:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:51.410 22:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.410 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:37:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:51.668 22:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.668 22:37:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:51.668 22:37:50 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:51.668 22:37:50 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "8a4f343d-f1d0-48ea-bd7d-1569f437c0c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8a4f343d-f1d0-48ea-bd7d-1569f437c0c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:51.668 22:37:50 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:51.668 22:37:50 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:51.668 22:37:50 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:51.668 22:37:50 -- bdev/blockdev.sh@752 -- # killprocess 140598 00:29:51.668 22:37:50 -- common/autotest_common.sh@926 -- # '[' -z 140598 ']' 00:29:51.668 22:37:50 -- common/autotest_common.sh@930 -- # kill -0 140598 00:29:51.668 22:37:50 -- common/autotest_common.sh@931 -- # uname 00:29:51.668 22:37:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:51.668 22:37:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140598 00:29:51.668 22:37:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:51.668 22:37:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:51.668 22:37:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140598' 00:29:51.668 killing process with pid 140598 00:29:51.668 22:37:50 -- common/autotest_common.sh@945 -- # kill 140598 00:29:51.668 22:37:50 -- common/autotest_common.sh@950 -- # wait 140598 00:29:53.569 22:37:52 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:53.569 22:37:52 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:53.569 22:37:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:53.569 22:37:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:53.569 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:29:53.569 ************************************ 00:29:53.569 START TEST bdev_hello_world 00:29:53.569 ************************************ 00:29:53.569 22:37:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:53.569 [2024-07-14 22:37:52.537396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:53.569 [2024-07-14 22:37:52.538336] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140690 ] 00:29:53.827 [2024-07-14 22:37:52.715965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.827 [2024-07-14 22:37:52.884315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.392 [2024-07-14 22:37:53.275124] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:54.392 [2024-07-14 22:37:53.275186] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:54.392 [2024-07-14 22:37:53.275227] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:54.392 [2024-07-14 22:37:53.277652] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:54.392 [2024-07-14 22:37:53.278136] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:54.392 [2024-07-14 22:37:53.278182] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:54.392 [2024-07-14 22:37:53.278472] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:54.392 00:29:54.392 [2024-07-14 22:37:53.278518] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:55.326 ************************************ 00:29:55.326 END TEST bdev_hello_world 00:29:55.326 ************************************ 00:29:55.326 00:29:55.326 real 0m1.668s 00:29:55.326 user 0m1.278s 00:29:55.326 sys 0m0.284s 00:29:55.326 22:37:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.326 22:37:54 -- common/autotest_common.sh@10 -- # set +x 00:29:55.326 22:37:54 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:55.326 22:37:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:55.326 22:37:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.326 22:37:54 -- common/autotest_common.sh@10 -- # set +x 00:29:55.326 ************************************ 00:29:55.326 START TEST bdev_bounds 00:29:55.326 ************************************ 00:29:55.326 22:37:54 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:55.326 22:37:54 -- bdev/blockdev.sh@288 -- # bdevio_pid=140757 00:29:55.326 Process bdevio pid: 140757 00:29:55.326 22:37:54 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:55.326 22:37:54 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 140757' 00:29:55.326 22:37:54 -- bdev/blockdev.sh@291 -- # waitforlisten 140757 00:29:55.326 22:37:54 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:55.326 22:37:54 -- common/autotest_common.sh@819 -- # '[' -z 140757 ']' 00:29:55.326 22:37:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.326 22:37:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.326 22:37:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
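bdev_bounds starts the bdevio binary with -w, which presumably parks it on the RPC socket until tests.py perform_tests (traced below) kicks off the suite; the same pair of commands run by hand:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests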
00:29:55.326 22:37:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:55.326 22:37:54 -- common/autotest_common.sh@10 -- # set +x 00:29:55.326 [2024-07-14 22:37:54.244748] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:55.326 [2024-07-14 22:37:54.244938] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140757 ] 00:29:55.584 [2024-07-14 22:37:54.405586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:55.584 [2024-07-14 22:37:54.570574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.584 [2024-07-14 22:37:54.570711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.584 [2024-07-14 22:37:54.570718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.152 22:37:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:56.152 22:37:55 -- common/autotest_common.sh@852 -- # return 0 00:29:56.152 22:37:55 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:56.410 I/O targets: 00:29:56.410 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:56.410 00:29:56.410 00:29:56.410 CUnit - A unit testing framework for C - Version 2.1-3 00:29:56.410 http://cunit.sourceforge.net/ 00:29:56.410 00:29:56.410 00:29:56.410 Suite: bdevio tests on: Nvme0n1 00:29:56.410 Test: blockdev write read block ...passed 00:29:56.410 Test: blockdev write zeroes read block ...passed 00:29:56.410 Test: blockdev write zeroes read no split ...passed 00:29:56.410 Test: blockdev write zeroes read split ...passed 00:29:56.410 Test: blockdev write zeroes read split partial ...passed 00:29:56.410 Test: blockdev reset ...[2024-07-14 22:37:55.280420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:56.410 [2024-07-14 22:37:55.284150] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:56.410 passed 00:29:56.410 Test: blockdev write read 8 blocks ...passed 00:29:56.410 Test: blockdev write read size > 128k ...passed 00:29:56.410 Test: blockdev write read invalid size ...passed 00:29:56.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:56.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:56.410 Test: blockdev write read max offset ...passed 00:29:56.410 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:56.410 Test: blockdev writev readv 8 blocks ...passed 00:29:56.410 Test: blockdev writev readv 30 x 1block ...passed 00:29:56.410 Test: blockdev writev readv block ...passed 00:29:56.410 Test: blockdev writev readv size > 128k ...passed 00:29:56.410 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:56.410 Test: blockdev comparev and writev ...[2024-07-14 22:37:55.293107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xb5a0d000 len:0x1000 00:29:56.410 [2024-07-14 22:37:55.293260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:56.410 passed 00:29:56.410 Test: blockdev nvme passthru rw ...passed 00:29:56.410 Test: blockdev nvme passthru vendor specific ...[2024-07-14 22:37:55.294057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:56.410 [2024-07-14 22:37:55.294159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:56.410 passed 00:29:56.410 Test: blockdev nvme admin passthru ...passed 00:29:56.410 Test: blockdev copy ...passed 00:29:56.410 00:29:56.410 Run Summary: Type Total Ran Passed Failed Inactive 00:29:56.410 suites 1 1 n/a 0 0 00:29:56.410 tests 23 23 23 0 0 00:29:56.410 asserts 152 152 152 0 n/a 00:29:56.410 00:29:56.410 Elapsed time = 0.200 seconds 00:29:56.410 0 00:29:56.410 22:37:55 -- bdev/blockdev.sh@293 -- # killprocess 140757 00:29:56.410 22:37:55 -- common/autotest_common.sh@926 -- # '[' -z 140757 ']' 00:29:56.410 22:37:55 -- common/autotest_common.sh@930 -- # kill -0 140757 00:29:56.410 22:37:55 -- common/autotest_common.sh@931 -- # uname 00:29:56.410 22:37:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:56.410 22:37:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140757 00:29:56.410 22:37:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:56.410 killing process with pid 140757 00:29:56.410 22:37:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:56.410 22:37:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140757' 00:29:56.410 22:37:55 -- common/autotest_common.sh@945 -- # kill 140757 00:29:56.410 22:37:55 -- common/autotest_common.sh@950 -- # wait 140757 00:29:57.345 22:37:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:57.345 ************************************ 00:29:57.345 END TEST bdev_bounds 00:29:57.345 ************************************ 00:29:57.345 00:29:57.345 real 0m2.096s 00:29:57.345 user 0m4.936s 00:29:57.345 sys 0m0.325s 00:29:57.345 22:37:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.345 22:37:56 -- common/autotest_common.sh@10 -- # set +x 00:29:57.345 22:37:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
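The bdev_nbd test that follows maps Nvme0n1 onto a kernel /dev/nbd0 node through the NBD RPCs; the same round trip done by hand with the rpc.py calls that appear in the trace below:

    # Export the bdev over NBD, list the mapping, then detach it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0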
00:29:57.345 22:37:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:57.345 22:37:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:57.345 22:37:56 -- common/autotest_common.sh@10 -- # set +x 00:29:57.345 ************************************ 00:29:57.345 START TEST bdev_nbd 00:29:57.345 ************************************ 00:29:57.345 22:37:56 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:57.345 22:37:56 -- bdev/blockdev.sh@298 -- # uname -s 00:29:57.345 22:37:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:57.345 22:37:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:57.345 22:37:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:57.345 22:37:56 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:57.345 22:37:56 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:57.345 22:37:56 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:57.345 22:37:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:57.345 22:37:56 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:57.345 22:37:56 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:57.345 22:37:56 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:57.345 22:37:56 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:57.345 22:37:56 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:57.345 22:37:56 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:57.345 22:37:56 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:57.345 22:37:56 -- bdev/blockdev.sh@316 -- # nbd_pid=140819 00:29:57.345 22:37:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:57.345 22:37:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:57.345 22:37:56 -- bdev/blockdev.sh@318 -- # waitforlisten 140819 /var/tmp/spdk-nbd.sock 00:29:57.345 22:37:56 -- common/autotest_common.sh@819 -- # '[' -z 140819 ']' 00:29:57.345 22:37:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:57.345 22:37:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:57.345 22:37:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:57.345 22:37:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:57.345 22:37:56 -- common/autotest_common.sh@10 -- # set +x 00:29:57.346 [2024-07-14 22:37:56.398820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:57.346 [2024-07-14 22:37:56.398995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.604 [2024-07-14 22:37:56.551058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.863 [2024-07-14 22:37:56.724929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.431 22:37:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:58.431 22:37:57 -- common/autotest_common.sh@852 -- # return 0 00:29:58.431 22:37:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@24 -- # local i 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:58.431 22:37:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:58.690 22:37:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:58.690 22:37:57 -- common/autotest_common.sh@857 -- # local i 00:29:58.690 22:37:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:58.690 22:37:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:58.690 22:37:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:58.690 22:37:57 -- common/autotest_common.sh@861 -- # break 00:29:58.690 22:37:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:58.690 22:37:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:58.690 22:37:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.690 1+0 records in 00:29:58.690 1+0 records out 00:29:58.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051781 s, 7.9 MB/s 00:29:58.690 22:37:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.690 22:37:57 -- common/autotest_common.sh@874 -- # size=4096 00:29:58.690 22:37:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.690 22:37:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:58.690 22:37:57 -- common/autotest_common.sh@877 -- # return 0 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:58.690 22:37:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:58.950 22:37:57 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:58.950 { 00:29:58.950 "nbd_device": "/dev/nbd0", 00:29:58.950 "bdev_name": "Nvme0n1" 00:29:58.950 } 00:29:58.950 ]' 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:58.950 { 00:29:58.950 "nbd_device": "/dev/nbd0", 00:29:58.950 "bdev_name": "Nvme0n1" 00:29:58.950 } 00:29:58.950 ]' 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@51 -- # local i 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.950 22:37:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@41 -- # break 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@45 -- # return 0 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.209 22:37:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@65 -- # true 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@65 -- # count=0 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@122 -- # count=0 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@127 -- # return 0 00:29:59.468 22:37:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@92 -- # local 
nbd_list 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@12 -- # local i 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:59.468 22:37:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:59.727 /dev/nbd0 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:59.727 22:37:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:59.727 22:37:58 -- common/autotest_common.sh@857 -- # local i 00:29:59.727 22:37:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:59.727 22:37:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:59.727 22:37:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:59.727 22:37:58 -- common/autotest_common.sh@861 -- # break 00:29:59.727 22:37:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:59.727 22:37:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:59.727 22:37:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.727 1+0 records in 00:29:59.727 1+0 records out 00:29:59.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048125 s, 8.5 MB/s 00:29:59.727 22:37:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.727 22:37:58 -- common/autotest_common.sh@874 -- # size=4096 00:29:59.727 22:37:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.727 22:37:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:59.727 22:37:58 -- common/autotest_common.sh@877 -- # return 0 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.727 22:37:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:59.987 22:37:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:59.987 { 00:29:59.987 "nbd_device": "/dev/nbd0", 00:29:59.987 "bdev_name": "Nvme0n1" 00:29:59.987 } 00:29:59.987 ]' 00:29:59.987 22:37:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:59.987 { 00:29:59.987 "nbd_device": "/dev/nbd0", 00:29:59.987 "bdev_name": "Nvme0n1" 00:29:59.987 } 00:29:59.987 ]' 00:29:59.987 22:37:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@65 -- # count=1 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@66 -- # echo 1 
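[Illustrative aside, not part of the captured output] The nbd_dd_data_verify step that follows condenses to the shell below, using the exact files, block sizes and flags from the trace; only the tmp variable and the trailing echo are additions for readability:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256             # stage 1 MiB of random data
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd export
    cmp -b -n 1M $tmp /dev/nbd0 && echo 'nbd data verified'  # byte-compare the first 1 MiB read back
    rm $tmp                                                  # clean up the staging file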
00:29:59.987 22:37:59 -- bdev/nbd_common.sh@95 -- # count=1 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:59.987 256+0 records in 00:29:59.987 256+0 records out 00:29:59.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00982349 s, 107 MB/s 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.987 22:37:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:00.247 256+0 records in 00:30:00.247 256+0 records out 00:30:00.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0624176 s, 16.8 MB/s 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@51 -- # local i 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.247 22:37:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@41 -- # break 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.506 22:37:59 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@65 -- # true 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@65 -- # count=0 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@104 -- # count=0 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@109 -- # return 0 00:30:00.766 22:37:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:00.766 22:37:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:01.039 malloc_lvol_verify 00:30:01.039 22:37:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:01.039 8f2480b3-12be-4c55-806e-cf45c1233742 00:30:01.039 22:38:00 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:01.316 a8f5c1b0-fae6-4e6b-a45f-be1c6621f4cf 00:30:01.316 22:38:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:01.575 /dev/nbd0 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:01.575 mke2fs 1.45.5 (07-Jan-2020) 00:30:01.575 00:30:01.575 Filesystem too small for a journal 00:30:01.575 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:01.575 00:30:01.575 Allocating group tables: 0/1 done 00:30:01.575 Writing inode tables: 0/1 done 00:30:01.575 Writing superblocks and filesystem accounting information: 0/1 done 00:30:01.575 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@51 -- # local i 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.575 22:38:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.834 22:38:00 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@41 -- # break 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:01.834 22:38:00 -- bdev/nbd_common.sh@147 -- # return 0 00:30:01.834 22:38:00 -- bdev/blockdev.sh@324 -- # killprocess 140819 00:30:01.834 22:38:00 -- common/autotest_common.sh@926 -- # '[' -z 140819 ']' 00:30:01.834 22:38:00 -- common/autotest_common.sh@930 -- # kill -0 140819 00:30:01.834 22:38:00 -- common/autotest_common.sh@931 -- # uname 00:30:01.834 22:38:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:01.834 22:38:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140819 00:30:01.834 22:38:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:01.834 killing process with pid 140819 00:30:01.834 22:38:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:01.834 22:38:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140819' 00:30:01.834 22:38:00 -- common/autotest_common.sh@945 -- # kill 140819 00:30:01.834 22:38:00 -- common/autotest_common.sh@950 -- # wait 140819 00:30:02.770 22:38:01 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:02.770 00:30:02.770 real 0m5.395s 00:30:02.770 user 0m7.858s 00:30:02.770 sys 0m1.004s 00:30:02.770 22:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.770 ************************************ 00:30:02.770 22:38:01 -- common/autotest_common.sh@10 -- # set +x 00:30:02.770 END TEST bdev_nbd 00:30:02.770 ************************************ 00:30:02.770 22:38:01 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:02.770 22:38:01 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:30:02.770 skipping fio tests on NVMe due to multi-ns failures. 00:30:02.770 22:38:01 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:02.770 22:38:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:02.770 22:38:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:02.770 22:38:01 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:02.770 22:38:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:02.770 22:38:01 -- common/autotest_common.sh@10 -- # set +x 00:30:02.770 ************************************ 00:30:02.770 START TEST bdev_verify 00:30:02.770 ************************************ 00:30:02.770 22:38:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:03.029 [2024-07-14 22:38:01.843345] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:03.029 [2024-07-14 22:38:01.843493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141012 ] 00:30:03.029 [2024-07-14 22:38:01.999479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:03.288 [2024-07-14 22:38:02.173670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.288 [2024-07-14 22:38:02.173678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.547 Running I/O for 5 seconds... 00:30:08.813 00:30:08.813 Latency(us) 00:30:08.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.813 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:08.813 Verification LBA range: start 0x0 length 0xa0000 00:30:08.813 Nvme0n1 : 5.01 14051.52 54.89 0.00 0.00 9073.59 599.51 12332.68 00:30:08.813 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:08.813 Verification LBA range: start 0xa0000 length 0xa0000 00:30:08.813 Nvme0n1 : 5.01 13849.66 54.10 0.00 0.00 9206.41 297.89 14596.65 00:30:08.813 =================================================================================================================== 00:30:08.813 Total : 27901.18 108.99 0.00 0.00 9139.52 297.89 14596.65 00:30:16.929 00:30:16.929 real 0m12.947s 00:30:16.929 user 0m24.733s 00:30:16.929 sys 0m0.305s 00:30:16.929 22:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.929 22:38:14 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 ************************************ 00:30:16.929 END TEST bdev_verify 00:30:16.929 ************************************ 00:30:16.929 22:38:14 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:16.929 22:38:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:16.929 22:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:16.929 22:38:14 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 ************************************ 00:30:16.929 START TEST bdev_verify_big_io 00:30:16.929 ************************************ 00:30:16.929 22:38:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:16.929 [2024-07-14 22:38:14.849758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:16.929 [2024-07-14 22:38:14.850149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141197 ] 00:30:16.929 [2024-07-14 22:38:15.009333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.929 [2024-07-14 22:38:15.219816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.929 [2024-07-14 22:38:15.219825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.929 Running I/O for 5 seconds... 
00:30:22.194 00:30:22.194 Latency(us) 00:30:22.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.195 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:22.195 Verification LBA range: start 0x0 length 0xa000 00:30:22.195 Nvme0n1 : 5.04 1901.54 118.85 0.00 0.00 66482.06 811.75 107240.73 00:30:22.195 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:22.195 Verification LBA range: start 0xa000 length 0xa000 00:30:22.195 Nvme0n1 : 5.05 1739.20 108.70 0.00 0.00 72590.81 400.29 113436.86 00:30:22.195 =================================================================================================================== 00:30:22.195 Total : 3640.74 227.55 0.00 0.00 69400.74 400.29 113436.86 00:30:23.137 00:30:23.137 real 0m7.213s 00:30:23.137 user 0m13.232s 00:30:23.137 sys 0m0.283s 00:30:23.137 22:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.137 22:38:22 -- common/autotest_common.sh@10 -- # set +x 00:30:23.137 ************************************ 00:30:23.137 END TEST bdev_verify_big_io 00:30:23.137 ************************************ 00:30:23.137 22:38:22 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.137 22:38:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:23.137 22:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:23.137 22:38:22 -- common/autotest_common.sh@10 -- # set +x 00:30:23.137 ************************************ 00:30:23.137 START TEST bdev_write_zeroes 00:30:23.137 ************************************ 00:30:23.137 22:38:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.137 [2024-07-14 22:38:22.130168] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:23.137 [2024-07-14 22:38:22.130390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141303 ] 00:30:23.417 [2024-07-14 22:38:22.300851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.679 [2024-07-14 22:38:22.521351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.936 Running I/O for 1 seconds... 
00:30:25.303 00:30:25.303 Latency(us) 00:30:25.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.303 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.303 Nvme0n1 : 1.00 68340.56 266.96 0.00 0.00 1867.89 580.89 10962.39 00:30:25.303 =================================================================================================================== 00:30:25.303 Total : 68340.56 266.96 0.00 0.00 1867.89 580.89 10962.39 00:30:26.237 00:30:26.237 real 0m2.992s 00:30:26.237 user 0m2.561s 00:30:26.237 sys 0m0.332s 00:30:26.237 ************************************ 00:30:26.237 END TEST bdev_write_zeroes 00:30:26.237 ************************************ 00:30:26.237 22:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.237 22:38:25 -- common/autotest_common.sh@10 -- # set +x 00:30:26.237 22:38:25 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.237 22:38:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:26.237 22:38:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.237 22:38:25 -- common/autotest_common.sh@10 -- # set +x 00:30:26.237 ************************************ 00:30:26.237 START TEST bdev_json_nonenclosed 00:30:26.237 ************************************ 00:30:26.237 22:38:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.237 [2024-07-14 22:38:25.172448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:26.237 [2024-07-14 22:38:25.172823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141381 ] 00:30:26.494 [2024-07-14 22:38:25.341879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.494 [2024-07-14 22:38:25.554443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.494 [2024-07-14 22:38:25.554734] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:26.494 [2024-07-14 22:38:25.554782] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:27.058 00:30:27.058 real 0m0.841s 00:30:27.058 user 0m0.582s 00:30:27.058 sys 0m0.159s 00:30:27.058 22:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.058 ************************************ 00:30:27.058 END TEST bdev_json_nonenclosed 00:30:27.058 ************************************ 00:30:27.058 22:38:25 -- common/autotest_common.sh@10 -- # set +x 00:30:27.058 22:38:25 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:27.058 22:38:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:27.058 22:38:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:27.058 22:38:25 -- common/autotest_common.sh@10 -- # set +x 00:30:27.058 ************************************ 00:30:27.058 START TEST bdev_json_nonarray 00:30:27.058 ************************************ 00:30:27.058 22:38:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:27.058 [2024-07-14 22:38:26.066077] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:27.058 [2024-07-14 22:38:26.066283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141419 ] 00:30:27.315 [2024-07-14 22:38:26.233832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.571 [2024-07-14 22:38:26.401308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.571 [2024-07-14 22:38:26.401544] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:27.571 [2024-07-14 22:38:26.401596] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:27.829 00:30:27.829 real 0m0.725s 00:30:27.829 user 0m0.479s 00:30:27.829 sys 0m0.146s 00:30:27.829 22:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.829 ************************************ 00:30:27.829 END TEST bdev_json_nonarray 00:30:27.829 ************************************ 00:30:27.829 22:38:26 -- common/autotest_common.sh@10 -- # set +x 00:30:27.829 22:38:26 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:27.829 22:38:26 -- bdev/blockdev.sh@809 -- # cleanup 00:30:27.829 22:38:26 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:27.829 22:38:26 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:27.829 22:38:26 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:30:27.829 22:38:26 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:30:27.829 00:30:27.829 real 0m38.287s 00:30:27.829 user 1m0.118s 00:30:27.829 sys 0m3.575s 00:30:27.829 22:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.829 22:38:26 -- common/autotest_common.sh@10 -- # set +x 00:30:27.829 ************************************ 00:30:27.829 END TEST blockdev_nvme 00:30:27.829 ************************************ 00:30:27.829 22:38:26 -- spdk/autotest.sh@219 -- # uname -s 00:30:27.829 22:38:26 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:30:27.829 22:38:26 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:27.829 22:38:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:27.829 22:38:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:27.829 22:38:26 -- common/autotest_common.sh@10 -- # set +x 00:30:27.829 ************************************ 00:30:27.829 START TEST blockdev_nvme_gpt 00:30:27.829 ************************************ 00:30:27.829 22:38:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:27.829 * Looking for test storage... 
00:30:27.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:27.829 22:38:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:27.829 22:38:26 -- bdev/nbd_common.sh@6 -- # set -e 00:30:27.829 22:38:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:27.829 22:38:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:27.829 22:38:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:27.829 22:38:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:28.087 22:38:26 -- bdev/blockdev.sh@18 -- # : 00:30:28.087 22:38:26 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:30:28.087 22:38:26 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:30:28.087 22:38:26 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:30:28.087 22:38:26 -- bdev/blockdev.sh@672 -- # uname -s 00:30:28.087 22:38:26 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:30:28.087 22:38:26 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:30:28.087 22:38:26 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:30:28.087 22:38:26 -- bdev/blockdev.sh@681 -- # crypto_device= 00:30:28.087 22:38:26 -- bdev/blockdev.sh@682 -- # dek= 00:30:28.087 22:38:26 -- bdev/blockdev.sh@683 -- # env_ctx= 00:30:28.087 22:38:26 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:30:28.087 22:38:26 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:30:28.087 22:38:26 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:30:28.087 22:38:26 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:30:28.087 22:38:26 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:30:28.087 22:38:26 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141502 00:30:28.087 22:38:26 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:28.087 22:38:26 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:28.087 22:38:26 -- bdev/blockdev.sh@47 -- # waitforlisten 141502 00:30:28.087 22:38:26 -- common/autotest_common.sh@819 -- # '[' -z 141502 ']' 00:30:28.087 22:38:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.087 22:38:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:28.087 22:38:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.087 22:38:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.087 22:38:26 -- common/autotest_common.sh@10 -- # set +x 00:30:28.087 [2024-07-14 22:38:26.987880] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:28.087 [2024-07-14 22:38:26.988677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141502 ] 00:30:28.087 [2024-07-14 22:38:27.152392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.345 [2024-07-14 22:38:27.341884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:28.345 [2024-07-14 22:38:27.342097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.719 22:38:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:29.719 22:38:28 -- common/autotest_common.sh@852 -- # return 0 00:30:29.719 22:38:28 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:30:29.719 22:38:28 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:30:29.719 22:38:28 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:29.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:29.978 Waiting for block devices as requested 00:30:29.978 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.978 22:38:28 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:30:29.978 22:38:28 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:29.978 22:38:28 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:29.978 22:38:28 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:29.978 22:38:28 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:29.978 22:38:28 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:29.978 22:38:28 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:29.978 22:38:28 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:29.978 22:38:28 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:29.978 22:38:28 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:30:29.978 22:38:28 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:30:29.978 22:38:28 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:30:29.978 22:38:28 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:30:29.978 22:38:28 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:30:29.978 22:38:28 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:30:29.978 22:38:28 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:30:29.978 22:38:28 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:30:29.978 BYT; 00:30:29.978 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:30:29.978 22:38:28 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:30:29.978 BYT; 00:30:29.978 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:30:29.978 22:38:28 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:30:29.978 22:38:28 -- bdev/blockdev.sh@114 -- # break 00:30:29.978 22:38:28 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:30:29.978 22:38:28 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:30:29.978 22:38:28 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:29.978 22:38:28 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:30:30.913 22:38:29 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:30:30.913 22:38:29 -- scripts/common.sh@410 -- # local spdk_guid 00:30:30.913 22:38:29 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:30.913 22:38:29 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:30.913 22:38:29 -- scripts/common.sh@415 -- # IFS='()' 00:30:30.913 22:38:29 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:30:30.913 22:38:29 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:30.913 22:38:29 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:30:30.913 22:38:29 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:30.913 22:38:29 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:30.913 22:38:29 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:30.913 22:38:29 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:30:30.913 22:38:29 -- scripts/common.sh@422 -- # local spdk_guid 00:30:30.913 22:38:29 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:30.913 22:38:29 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:30.913 22:38:29 -- scripts/common.sh@427 -- # IFS='()' 00:30:30.913 22:38:29 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:30:30.913 22:38:29 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:30.913 22:38:29 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:30:30.913 22:38:29 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:30.913 22:38:29 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:30.913 22:38:29 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:30.913 22:38:29 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:30:31.849 The operation has completed successfully. 00:30:31.849 22:38:30 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:30:33.226 The operation has completed successfully. 
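[Illustrative aside, not part of the captured output] The GPT layout the gpt suite just created is reproducible with the three commands from the trace. The partition type GUIDs are SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD as grepped above from module/bdev/gpt/gpt.h, and the unique partition GUIDs are the test's fixed g_unique_partguid values:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1  # SPDK_TEST_first, current SPDK GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1  # SPDK_TEST_second, old SPDK GUID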
00:30:33.226 22:38:31 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:33.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:33.486 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:34.862 22:38:33 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 [] 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:30:34.862 22:38:33 -- bdev/blockdev.sh@79 -- # local json 00:30:34.862 22:38:33 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:34.862 22:38:33 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:34.862 22:38:33 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@738 -- # cat 00:30:34.862 22:38:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:34.862 22:38:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:34.862 22:38:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:34.862 22:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.862 22:38:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.862 22:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.862 22:38:33 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:34.862 22:38:33 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:34.862 22:38:33 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:30:34.862 22:38:33 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:34.862 22:38:33 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:30:34.862 22:38:33 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:34.862 22:38:33 -- bdev/blockdev.sh@752 -- # killprocess 141502 00:30:34.862 22:38:33 -- common/autotest_common.sh@926 -- # '[' -z 141502 ']' 00:30:34.862 22:38:33 -- common/autotest_common.sh@930 -- # kill -0 141502 00:30:34.862 22:38:33 -- common/autotest_common.sh@931 -- # uname 00:30:34.862 22:38:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:34.862 22:38:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141502 00:30:34.862 22:38:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:34.862 22:38:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:34.862 killing process with pid 141502 00:30:34.862 22:38:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141502' 00:30:34.862 22:38:33 -- common/autotest_common.sh@945 -- # kill 141502 00:30:34.862 22:38:33 -- common/autotest_common.sh@950 -- # wait 141502 00:30:36.761 22:38:35 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:36.761 22:38:35 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:30:36.761 22:38:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:36.761 22:38:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.761 22:38:35 -- common/autotest_common.sh@10 -- # set +x 00:30:36.761 ************************************ 00:30:36.761 START TEST bdev_hello_world 00:30:36.761 ************************************ 00:30:36.761 22:38:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:30:37.019 [2024-07-14 22:38:35.856449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:37.019 [2024-07-14 22:38:35.856682] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142091 ] 00:30:37.019 [2024-07-14 22:38:36.024864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.277 [2024-07-14 22:38:36.178365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.536 [2024-07-14 22:38:36.550028] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:37.536 [2024-07-14 22:38:36.550108] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:30:37.536 [2024-07-14 22:38:36.550138] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:37.536 [2024-07-14 22:38:36.552569] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:37.536 [2024-07-14 22:38:36.553114] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:37.536 [2024-07-14 22:38:36.553164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:37.536 [2024-07-14 22:38:36.553519] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:37.536 00:30:37.536 [2024-07-14 22:38:36.553635] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:38.470 00:30:38.470 real 0m1.682s 00:30:38.470 user 0m1.340s 00:30:38.470 sys 0m0.226s 00:30:38.470 22:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.470 22:38:37 -- common/autotest_common.sh@10 -- # set +x 00:30:38.470 ************************************ 00:30:38.470 END TEST bdev_hello_world 00:30:38.470 ************************************ 00:30:38.470 22:38:37 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:38.470 22:38:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:38.470 22:38:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:38.470 22:38:37 -- common/autotest_common.sh@10 -- # set +x 00:30:38.470 ************************************ 00:30:38.470 START TEST bdev_bounds 00:30:38.470 ************************************ 00:30:38.470 22:38:37 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:30:38.470 22:38:37 -- bdev/blockdev.sh@288 -- # bdevio_pid=142135 00:30:38.470 22:38:37 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:38.470 Process bdevio pid: 142135 00:30:38.470 22:38:37 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:38.470 22:38:37 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 142135' 00:30:38.470 22:38:37 -- bdev/blockdev.sh@291 -- # waitforlisten 142135 00:30:38.470 22:38:37 -- common/autotest_common.sh@819 -- # '[' -z 142135 ']' 00:30:38.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.470 22:38:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.470 22:38:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:38.470 22:38:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:38.470 22:38:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:38.470 22:38:37 -- common/autotest_common.sh@10 -- # set +x 00:30:38.728 [2024-07-14 22:38:37.589655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:38.728 [2024-07-14 22:38:37.589832] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142135 ] 00:30:38.728 [2024-07-14 22:38:37.765153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:38.987 [2024-07-14 22:38:37.942255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.987 [2024-07-14 22:38:37.942659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.987 [2024-07-14 22:38:37.942599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.554 22:38:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:39.554 22:38:38 -- common/autotest_common.sh@852 -- # return 0 00:30:39.554 22:38:38 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:39.554 I/O targets: 00:30:39.554 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:39.554 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:39.554 00:30:39.554 00:30:39.554 CUnit - A unit testing framework for C - Version 2.1-3 00:30:39.554 http://cunit.sourceforge.net/ 00:30:39.554 00:30:39.554 00:30:39.554 Suite: bdevio tests on: Nvme0n1p2 00:30:39.554 Test: blockdev write read block ...passed 00:30:39.554 Test: blockdev write zeroes read block ...passed 00:30:39.554 Test: blockdev write zeroes read no split ...passed 00:30:39.554 Test: blockdev write zeroes read split ...passed 00:30:39.554 Test: blockdev write zeroes read split partial ...passed 00:30:39.554 Test: blockdev reset ...[2024-07-14 22:38:38.611641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:39.554 [2024-07-14 22:38:38.615254] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:39.554 passed 00:30:39.554 Test: blockdev write read 8 blocks ...passed 00:30:39.554 Test: blockdev write read size > 128k ...passed 00:30:39.554 Test: blockdev write read invalid size ...passed 00:30:39.554 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:39.554 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:39.554 Test: blockdev write read max offset ...passed 00:30:39.554 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:39.554 Test: blockdev writev readv 8 blocks ...passed 00:30:39.554 Test: blockdev writev readv 30 x 1block ...passed 00:30:39.554 Test: blockdev writev readv block ...passed 00:30:39.554 Test: blockdev writev readv size > 128k ...passed 00:30:39.554 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:39.554 Test: blockdev comparev and writev ...[2024-07-14 22:38:38.625760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x11cc0b000 len:0x1000 00:30:39.554 [2024-07-14 22:38:38.626009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:39.554 passed 00:30:39.554 Test: blockdev nvme passthru rw ...passed 00:30:39.554 Test: blockdev nvme passthru vendor specific ...passed 00:30:39.554 Test: blockdev nvme admin passthru ...passed 00:30:39.813 Test: blockdev copy ...passed 00:30:39.813 Suite: bdevio tests on: Nvme0n1p1 00:30:39.813 Test: blockdev write read block ...passed 00:30:39.813 Test: blockdev write zeroes read block ...passed 00:30:39.813 Test: blockdev write zeroes read no split ...passed 00:30:39.813 Test: blockdev write zeroes read split ...passed 00:30:39.813 Test: blockdev write zeroes read split partial ...passed 00:30:39.813 Test: blockdev reset ...[2024-07-14 22:38:38.678253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:39.813 [2024-07-14 22:38:38.681563] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:39.813 passed 00:30:39.813 Test: blockdev write read 8 blocks ...passed 00:30:39.813 Test: blockdev write read size > 128k ...passed 00:30:39.813 Test: blockdev write read invalid size ...passed 00:30:39.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:39.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:39.813 Test: blockdev write read max offset ...passed 00:30:39.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:39.813 Test: blockdev writev readv 8 blocks ...passed 00:30:39.813 Test: blockdev writev readv 30 x 1block ...passed 00:30:39.813 Test: blockdev writev readv block ...passed 00:30:39.813 Test: blockdev writev readv size > 128k ...passed 00:30:39.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:39.813 Test: blockdev comparev and writev ...[2024-07-14 22:38:38.691420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x11cc0d000 len:0x1000 00:30:39.813 [2024-07-14 22:38:38.691602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:39.813 passed 00:30:39.813 Test: blockdev nvme passthru rw ...passed 00:30:39.813 Test: blockdev nvme passthru vendor specific ...passed 00:30:39.813 Test: blockdev nvme admin passthru ...passed 00:30:39.813 Test: blockdev copy ...passed 00:30:39.813 00:30:39.813 Run Summary: Type Total Ran Passed Failed Inactive 00:30:39.813 suites 2 2 n/a 0 0 00:30:39.813 tests 46 46 46 0 0 00:30:39.813 asserts 284 284 284 0 n/a 00:30:39.813 00:30:39.813 Elapsed time = 0.346 seconds 00:30:39.813 0 00:30:39.813 22:38:38 -- bdev/blockdev.sh@293 -- # killprocess 142135 00:30:39.813 22:38:38 -- common/autotest_common.sh@926 -- # '[' -z 142135 ']' 00:30:39.813 22:38:38 -- common/autotest_common.sh@930 -- # kill -0 142135 00:30:39.813 22:38:38 -- common/autotest_common.sh@931 -- # uname 00:30:39.813 22:38:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:39.813 22:38:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142135 00:30:39.813 22:38:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:39.813 22:38:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:39.813 22:38:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142135' 00:30:39.813 killing process with pid 142135 00:30:39.813 22:38:38 -- common/autotest_common.sh@945 -- # kill 142135 00:30:39.813 22:38:38 -- common/autotest_common.sh@950 -- # wait 142135 00:30:40.748 22:38:39 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:40.748 00:30:40.748 real 0m2.079s 00:30:40.748 user 0m4.788s 00:30:40.748 sys 0m0.345s 00:30:40.748 22:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.748 ************************************ 00:30:40.748 22:38:39 -- common/autotest_common.sh@10 -- # set +x 00:30:40.748 END TEST bdev_bounds 00:30:40.748 ************************************ 00:30:40.748 22:38:39 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:40.748 22:38:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:40.748 22:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.748 22:38:39 -- common/autotest_common.sh@10 -- # set +x 00:30:40.748 ************************************ 00:30:40.748 START TEST bdev_nbd 
00:30:40.748 ************************************ 00:30:40.748 22:38:39 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:40.748 22:38:39 -- bdev/blockdev.sh@298 -- # uname -s 00:30:40.748 22:38:39 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:40.748 22:38:39 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:40.748 22:38:39 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:40.749 22:38:39 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:30:40.749 22:38:39 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:40.749 22:38:39 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:30:40.749 22:38:39 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:40.749 22:38:39 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:30:40.749 22:38:39 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:40.749 22:38:39 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:30:40.749 22:38:39 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:30:40.749 22:38:39 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:40.749 22:38:39 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:30:40.749 22:38:39 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:40.749 22:38:39 -- bdev/blockdev.sh@316 -- # nbd_pid=142198 00:30:40.749 22:38:39 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:40.749 22:38:39 -- bdev/blockdev.sh@318 -- # waitforlisten 142198 /var/tmp/spdk-nbd.sock 00:30:40.749 22:38:39 -- common/autotest_common.sh@819 -- # '[' -z 142198 ']' 00:30:40.749 22:38:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:40.749 22:38:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:40.749 22:38:39 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:40.749 22:38:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:40.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:40.749 22:38:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:40.749 22:38:39 -- common/autotest_common.sh@10 -- # set +x 00:30:40.749 [2024-07-14 22:38:39.715493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
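The nbd test below runs against its own minimal SPDK instance: bdev_svc is started listening on /var/tmp/spdk-nbd.sock, and every later rpc.py call passes -s to address that socket instead of the default /var/tmp/spdk.sock. Both commands appear verbatim in the trace that follows (paths shortened to the repo root):

  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json ./test/bdev/bdev.json &
  # export a GPT partition bdev as a kernel block device over that socket
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0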
00:30:40.749 [2024-07-14 22:38:39.715900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.007 [2024-07-14 22:38:39.867727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.007 [2024-07-14 22:38:40.045846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.943 22:38:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:41.943 22:38:40 -- common/autotest_common.sh@852 -- # return 0 00:30:41.943 22:38:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@24 -- # local i 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:41.943 22:38:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:41.943 22:38:40 -- common/autotest_common.sh@857 -- # local i 00:30:41.943 22:38:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:41.943 22:38:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:41.943 22:38:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:41.943 22:38:40 -- common/autotest_common.sh@861 -- # break 00:30:41.943 22:38:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:41.943 22:38:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:41.943 22:38:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:41.943 1+0 records in 00:30:41.943 1+0 records out 00:30:41.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483838 s, 8.5 MB/s 00:30:41.943 22:38:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:41.943 22:38:40 -- common/autotest_common.sh@874 -- # size=4096 00:30:41.943 22:38:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:41.943 22:38:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:41.943 22:38:40 -- common/autotest_common.sh@877 -- # return 0 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:41.943 22:38:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:42.202 22:38:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:42.202 22:38:41 -- common/autotest_common.sh@857 -- # local i 00:30:42.202 22:38:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:42.202 22:38:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:42.202 22:38:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:42.202 22:38:41 -- common/autotest_common.sh@861 -- # break 00:30:42.202 22:38:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:42.202 22:38:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:42.202 22:38:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.202 1+0 records in 00:30:42.202 1+0 records out 00:30:42.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889981 s, 4.6 MB/s 00:30:42.202 22:38:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.202 22:38:41 -- common/autotest_common.sh@874 -- # size=4096 00:30:42.202 22:38:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.202 22:38:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:42.202 22:38:41 -- common/autotest_common.sh@877 -- # return 0 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:42.202 22:38:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:42.461 { 00:30:42.461 "nbd_device": "/dev/nbd0", 00:30:42.461 "bdev_name": "Nvme0n1p1" 00:30:42.461 }, 00:30:42.461 { 00:30:42.461 "nbd_device": "/dev/nbd1", 00:30:42.461 "bdev_name": "Nvme0n1p2" 00:30:42.461 } 00:30:42.461 ]' 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:42.461 { 00:30:42.461 "nbd_device": "/dev/nbd0", 00:30:42.461 "bdev_name": "Nvme0n1p1" 00:30:42.461 }, 00:30:42.461 { 00:30:42.461 "nbd_device": "/dev/nbd1", 00:30:42.461 "bdev_name": "Nvme0n1p2" 00:30:42.461 } 00:30:42.461 ]' 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@51 -- # local i 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:42.461 22:38:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:42.719 22:38:41 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:42.719 22:38:41 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@41 -- # break 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@45 -- # return 0 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:42.977 22:38:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@41 -- # break 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@45 -- # return 0 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:43.236 22:38:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@65 -- # true 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@65 -- # count=0 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@122 -- # count=0 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@127 -- # return 0 00:30:43.494 22:38:42 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:43.494 22:38:42 -- 
bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@12 -- # local i 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:43.494 22:38:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:43.756 /dev/nbd0 00:30:43.756 22:38:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:43.756 22:38:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:43.756 22:38:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:43.756 22:38:42 -- common/autotest_common.sh@857 -- # local i 00:30:43.756 22:38:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:43.756 22:38:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:43.756 22:38:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:43.756 22:38:42 -- common/autotest_common.sh@861 -- # break 00:30:43.756 22:38:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:43.756 22:38:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:43.756 22:38:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:43.756 1+0 records in 00:30:43.756 1+0 records out 00:30:43.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003839 s, 10.7 MB/s 00:30:43.756 22:38:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:43.756 22:38:42 -- common/autotest_common.sh@874 -- # size=4096 00:30:43.756 22:38:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:43.756 22:38:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:43.756 22:38:42 -- common/autotest_common.sh@877 -- # return 0 00:30:43.756 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:43.756 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:43.756 22:38:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:44.024 /dev/nbd1 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:44.024 22:38:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:44.024 22:38:42 -- common/autotest_common.sh@857 -- # local i 00:30:44.024 22:38:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:44.024 22:38:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:44.024 22:38:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:44.024 22:38:42 -- common/autotest_common.sh@861 -- # break 00:30:44.024 22:38:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:44.024 22:38:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:44.024 22:38:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:44.024 1+0 records in 00:30:44.024 1+0 records out 00:30:44.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571651 s, 7.2 MB/s 00:30:44.024 22:38:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:44.024 22:38:42 -- common/autotest_common.sh@874 -- # size=4096 00:30:44.024 22:38:42 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:44.024 22:38:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:44.024 22:38:42 -- common/autotest_common.sh@877 -- # return 0 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:44.024 22:38:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:44.290 { 00:30:44.290 "nbd_device": "/dev/nbd0", 00:30:44.290 "bdev_name": "Nvme0n1p1" 00:30:44.290 }, 00:30:44.290 { 00:30:44.290 "nbd_device": "/dev/nbd1", 00:30:44.290 "bdev_name": "Nvme0n1p2" 00:30:44.290 } 00:30:44.290 ]' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:44.290 { 00:30:44.290 "nbd_device": "/dev/nbd0", 00:30:44.290 "bdev_name": "Nvme0n1p1" 00:30:44.290 }, 00:30:44.290 { 00:30:44.290 "nbd_device": "/dev/nbd1", 00:30:44.290 "bdev_name": "Nvme0n1p2" 00:30:44.290 } 00:30:44.290 ]' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:44.290 /dev/nbd1' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:44.290 /dev/nbd1' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@65 -- # count=2 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@95 -- # count=2 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:44.290 256+0 records in 00:30:44.290 256+0 records out 00:30:44.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360444 s, 291 MB/s 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:44.290 256+0 records in 00:30:44.290 256+0 records out 00:30:44.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105613 s, 9.9 MB/s 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.290 22:38:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:44.550 256+0 records in 00:30:44.550 256+0 records out 00:30:44.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0827147 s, 12.7 MB/s 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@51 -- # local i 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.550 22:38:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@41 -- # break 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.809 22:38:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:45.068 22:38:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@41 -- # break 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@45 -- # return 0 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:45.327 22:38:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
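The data-verify pass traced above reduces to three steps: seed a 1 MiB random file, dd it onto each exported nbd device with O_DIRECT, then cmp each device back against the file. A condensed sketch using the same sizes and device names as the dd/cmp invocations in this log:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct  # write pass
      cmp -b -n 1M nbdrandtest $d                             # verify pass
  done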
00:30:45.327 22:38:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:45.585 22:38:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@65 -- # true 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@65 -- # count=0 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@104 -- # count=0 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@109 -- # return 0 00:30:45.586 22:38:44 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:45.586 22:38:44 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:45.844 malloc_lvol_verify 00:30:45.844 22:38:44 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:46.102 3fbde584-1816-4e44-82b5-a48e6b05b4f6 00:30:46.102 22:38:45 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:46.359 09cfd5f6-c28a-479f-b906-137256e58765 00:30:46.359 22:38:45 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:46.618 /dev/nbd0 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:46.618 mke2fs 1.45.5 (07-Jan-2020) 00:30:46.618 00:30:46.618 Filesystem too small for a journal 00:30:46.618 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:46.618 00:30:46.618 Allocating group tables: 0/1 done 00:30:46.618 Writing inode tables: 0/1 done 00:30:46.618 Writing superblocks and filesystem accounting information: 0/1 done 00:30:46.618 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@51 -- # local i 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:46.618 22:38:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:46.876 22:38:45 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@41 -- # break 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@45 -- # return 0 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:46.876 22:38:45 -- bdev/nbd_common.sh@147 -- # return 0 00:30:46.876 22:38:45 -- bdev/blockdev.sh@324 -- # killprocess 142198 00:30:46.876 22:38:45 -- common/autotest_common.sh@926 -- # '[' -z 142198 ']' 00:30:46.876 22:38:45 -- common/autotest_common.sh@930 -- # kill -0 142198 00:30:46.876 22:38:45 -- common/autotest_common.sh@931 -- # uname 00:30:46.876 22:38:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:46.876 22:38:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142198 00:30:46.876 22:38:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:46.876 killing process with pid 142198 00:30:46.876 22:38:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:46.876 22:38:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142198' 00:30:46.877 22:38:45 -- common/autotest_common.sh@945 -- # kill 142198 00:30:46.877 22:38:45 -- common/autotest_common.sh@950 -- # wait 142198 00:30:47.812 22:38:46 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:47.812 ************************************ 00:30:47.812 END TEST bdev_nbd 00:30:47.812 ************************************ 00:30:47.812 00:30:47.812 real 0m7.083s 00:30:47.812 user 0m10.213s 00:30:47.812 sys 0m1.490s 00:30:47.812 22:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.812 22:38:46 -- common/autotest_common.sh@10 -- # set +x 00:30:47.812 22:38:46 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:47.812 22:38:46 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:47.812 skipping fio tests on NVMe due to multi-ns failures. 00:30:47.812 22:38:46 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:47.812 22:38:46 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:47.812 22:38:46 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:47.812 22:38:46 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:47.812 22:38:46 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:47.812 22:38:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.812 22:38:46 -- common/autotest_common.sh@10 -- # set +x 00:30:47.812 ************************************ 00:30:47.812 START TEST bdev_verify 00:30:47.812 ************************************ 00:30:47.812 22:38:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:47.812 [2024-07-14 22:38:46.852453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
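The verify stage starting here uses bdevperf: -q 128 keeps 128 I/Os in flight, -o 4096 issues 4 KiB I/Os, -w verify reads back and checks everything it writes, -t 5 runs for five seconds, and -m 0x3 matches the two reactors logged below (-C is passed through from blockdev.sh and left unexplained here). The command as run in this log:

  ./build/examples/bdevperf --json ./test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3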
00:30:47.812 [2024-07-14 22:38:46.852590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142479 ] 00:30:48.070 [2024-07-14 22:38:47.008594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:48.329 [2024-07-14 22:38:47.167347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.329 [2024-07-14 22:38:47.167356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.587 Running I/O for 5 seconds... 00:30:53.856 00:30:53.856 Latency(us) 00:30:53.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.856 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.856 Verification LBA range: start 0x0 length 0x4ff80 00:30:53.856 Nvme0n1p1 : 5.01 5729.49 22.38 0.00 0.00 22282.27 908.57 21686.46 00:30:53.856 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:53.856 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:53.856 Nvme0n1p1 : 5.01 5722.58 22.35 0.00 0.00 22311.85 1586.27 24188.74 00:30:53.856 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.856 Verification LBA range: start 0x0 length 0x4ff7f 00:30:53.856 Nvme0n1p2 : 5.02 5728.17 22.38 0.00 0.00 22267.58 886.23 22043.93 00:30:53.856 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:53.856 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:53.856 Nvme0n1p2 : 5.02 5727.55 22.37 0.00 0.00 22264.83 975.59 24427.05 00:30:53.856 =================================================================================================================== 00:30:53.856 Total : 22907.79 89.48 0.00 0.00 22281.62 886.23 24427.05 00:30:55.229 00:30:55.229 real 0m7.173s 00:30:55.229 user 0m13.317s 00:30:55.229 sys 0m0.225s 00:30:55.229 ************************************ 00:30:55.229 END TEST bdev_verify 00:30:55.229 ************************************ 00:30:55.229 22:38:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.229 22:38:53 -- common/autotest_common.sh@10 -- # set +x 00:30:55.229 22:38:54 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:55.229 22:38:54 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:55.229 22:38:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.229 22:38:54 -- common/autotest_common.sh@10 -- # set +x 00:30:55.229 ************************************ 00:30:55.229 START TEST bdev_verify_big_io 00:30:55.229 ************************************ 00:30:55.229 22:38:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:55.229 [2024-07-14 22:38:54.095985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
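The big-I/O pass is the same bdevperf verify run with the I/O size raised from 4096 to 65536 bytes. The MiB/s column in these result tables is simply IOPS x I/O size: in the summary above, 5729.49 IOPS x 4 KiB ~ 22.38 MiB/s, and in the table below, 1097.07 IOPS x 64 KiB ~ 68.57 MiB/s. Command as run in this log:

  ./build/examples/bdevperf --json ./test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3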
00:30:55.229 [2024-07-14 22:38:54.096186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142586 ] 00:30:55.229 [2024-07-14 22:38:54.268475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:55.486 [2024-07-14 22:38:54.426132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.486 [2024-07-14 22:38:54.426173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.053 Running I/O for 5 seconds... 00:31:01.321 00:31:01.321 Latency(us) 00:31:01.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.321 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:01.321 Verification LBA range: start 0x0 length 0x4ff8 00:31:01.321 Nvme0n1p1 : 5.10 1097.07 68.57 0.00 0.00 115107.40 20971.52 165865.66 00:31:01.321 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:01.321 Verification LBA range: start 0x4ff8 length 0x4ff8 00:31:01.321 Nvme0n1p1 : 5.12 1301.37 81.34 0.00 0.00 96697.93 3604.48 141081.13 00:31:01.321 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:01.321 Verification LBA range: start 0x0 length 0x4ff7 00:31:01.321 Nvme0n1p2 : 5.10 1112.52 69.53 0.00 0.00 112751.45 759.62 124875.87 00:31:01.321 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:01.321 Verification LBA range: start 0x4ff7 length 0x4ff7 00:31:01.321 Nvme0n1p2 : 5.11 1302.04 81.38 0.00 0.00 97492.23 2978.91 150613.64 00:31:01.321 =================================================================================================================== 00:31:01.321 Total : 4813.01 300.81 0.00 0.00 104807.14 759.62 165865.66 00:31:02.698 00:31:02.698 real 0m7.320s 00:31:02.698 user 0m13.537s 00:31:02.698 sys 0m0.274s 00:31:02.698 22:39:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:02.698 22:39:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.698 ************************************ 00:31:02.698 END TEST bdev_verify_big_io 00:31:02.698 ************************************ 00:31:02.698 22:39:01 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:02.698 22:39:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:02.698 22:39:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:02.698 22:39:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.698 ************************************ 00:31:02.698 START TEST bdev_write_zeroes 00:31:02.698 ************************************ 00:31:02.698 22:39:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:02.698 [2024-07-14 22:39:01.480705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
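bdev_write_zeroes swaps the workload for -w write_zeroes on a single core (-c 0x1 in the EAL line below) and runs for one second, exercising the zero-fill path rather than read-back verification. Command as run in this log:

  ./build/examples/bdevperf --json ./test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1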
00:31:02.698 [2024-07-14 22:39:01.481942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142710 ] 00:31:02.698 [2024-07-14 22:39:01.647770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.956 [2024-07-14 22:39:01.811635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.214 Running I/O for 1 seconds... 00:31:04.146 00:31:04.146 Latency(us) 00:31:04.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.146 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:04.146 Nvme0n1p1 : 1.01 28810.93 112.54 0.00 0.00 4434.05 2278.87 23712.12 00:31:04.146 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:04.146 Nvme0n1p2 : 1.01 28795.19 112.48 0.00 0.00 4429.96 2129.92 14179.61 00:31:04.146 =================================================================================================================== 00:31:04.146 Total : 57606.12 225.02 0.00 0.00 4432.00 2129.92 23712.12 00:31:05.079 00:31:05.079 real 0m2.650s 00:31:05.079 user 0m2.292s 00:31:05.079 sys 0m0.258s 00:31:05.079 22:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.079 22:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:05.079 ************************************ 00:31:05.079 END TEST bdev_write_zeroes 00:31:05.079 ************************************ 00:31:05.079 22:39:04 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:05.079 22:39:04 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:05.079 22:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:05.079 22:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:05.079 ************************************ 00:31:05.079 START TEST bdev_json_nonenclosed 00:31:05.079 ************************************ 00:31:05.079 22:39:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:05.338 [2024-07-14 22:39:04.178916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:05.338 [2024-07-14 22:39:04.179160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142760 ] 00:31:05.338 [2024-07-14 22:39:04.343934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.596 [2024-07-14 22:39:04.519894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.596 [2024-07-14 22:39:04.520082] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
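The ERROR above is the point of bdev_json_nonenclosed: it feeds bdevperf a config whose top level is not a JSON object and asserts that startup fails cleanly. The fixture itself is not shown in this log; a minimal illustration (an assumption, not the actual nonenclosed.json) of input that would trip the same "not enclosed in {}" check:

  "subsystems": []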
00:31:05.596 [2024-07-14 22:39:04.520126] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:05.854 00:31:05.854 real 0m0.737s 00:31:05.854 user 0m0.528s 00:31:05.854 sys 0m0.108s 00:31:05.854 22:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.854 ************************************ 00:31:05.854 END TEST bdev_json_nonenclosed 00:31:05.854 ************************************ 00:31:05.854 22:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:05.854 22:39:04 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:05.854 22:39:04 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:05.854 22:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:05.854 22:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:05.854 ************************************ 00:31:05.854 START TEST bdev_json_nonarray 00:31:05.854 ************************************ 00:31:05.854 22:39:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:06.112 [2024-07-14 22:39:04.963623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:06.112 [2024-07-14 22:39:04.964046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142798 ] 00:31:06.112 [2024-07-14 22:39:05.133460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.370 [2024-07-14 22:39:05.317134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.370 [2024-07-14 22:39:05.317363] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
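bdev_json_nonarray is the matching negative test for the error just above: here "subsystems" is present but is not an array, and the loader must again exit non-zero rather than crash. The actual nonarray.json is not shown in this log; an illustrative config (an assumption) that would trigger the same check:

  { "subsystems": {} }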
00:31:06.370 [2024-07-14 22:39:05.317425] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:06.628 00:31:06.628 real 0m0.752s 00:31:06.628 user 0m0.538s 00:31:06.628 sys 0m0.114s 00:31:06.628 22:39:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.628 ************************************ 00:31:06.628 END TEST bdev_json_nonarray 00:31:06.628 ************************************ 00:31:06.628 22:39:05 -- common/autotest_common.sh@10 -- # set +x 00:31:06.628 22:39:05 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:31:06.628 22:39:05 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:31:06.628 22:39:05 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:31:06.628 22:39:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:06.628 22:39:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:06.628 22:39:05 -- common/autotest_common.sh@10 -- # set +x 00:31:06.887 ************************************ 00:31:06.887 START TEST bdev_gpt_uuid 00:31:06.887 ************************************ 00:31:06.887 22:39:05 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:31:06.887 22:39:05 -- bdev/blockdev.sh@612 -- # local bdev 00:31:06.887 22:39:05 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:31:06.887 22:39:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142853 00:31:06.887 22:39:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:06.887 22:39:05 -- bdev/blockdev.sh@47 -- # waitforlisten 142853 00:31:06.887 22:39:05 -- common/autotest_common.sh@819 -- # '[' -z 142853 ']' 00:31:06.887 22:39:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.887 22:39:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:06.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.887 22:39:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.887 22:39:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:06.887 22:39:05 -- common/autotest_common.sh@10 -- # set +x 00:31:06.887 22:39:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:06.887 [2024-07-14 22:39:05.790787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:06.887 [2024-07-14 22:39:05.791233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142853 ] 00:31:06.887 [2024-07-14 22:39:05.958286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.146 [2024-07-14 22:39:06.132912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:07.146 [2024-07-14 22:39:06.133162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.523 22:39:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.523 22:39:07 -- common/autotest_common.sh@852 -- # return 0 00:31:08.523 22:39:07 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:08.523 22:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.523 22:39:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.523 Some configs were skipped because the RPC state that can call them passed over. 
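The gpt_uuid test that follows talks to a plain spdk_tgt: it loads the bdev config, waits for examine to finish, then looks bdevs up by their GPT unique-partition GUID, which was fixed when the disk was partitioned earlier in this job. All three RPCs appear in the trace below (rpc_cmd is a thin wrapper around rpc.py):

  ./scripts/rpc.py load_config -j ./test/bdev/bdev.json
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030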
00:31:08.523 22:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.523 22:39:07 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:31:08.523 22:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.523 22:39:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.523 22:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.523 22:39:07 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:31:08.523 22:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.523 22:39:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.523 22:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.523 22:39:07 -- bdev/blockdev.sh@619 -- # bdev='[ 00:31:08.523 { 00:31:08.523 "name": "Nvme0n1p1", 00:31:08.523 "aliases": [ 00:31:08.523 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:31:08.523 ], 00:31:08.523 "product_name": "GPT Disk", 00:31:08.523 "block_size": 4096, 00:31:08.523 "num_blocks": 655104, 00:31:08.523 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:08.523 "assigned_rate_limits": { 00:31:08.523 "rw_ios_per_sec": 0, 00:31:08.523 "rw_mbytes_per_sec": 0, 00:31:08.523 "r_mbytes_per_sec": 0, 00:31:08.523 "w_mbytes_per_sec": 0 00:31:08.523 }, 00:31:08.523 "claimed": false, 00:31:08.523 "zoned": false, 00:31:08.523 "supported_io_types": { 00:31:08.523 "read": true, 00:31:08.523 "write": true, 00:31:08.523 "unmap": true, 00:31:08.523 "write_zeroes": true, 00:31:08.523 "flush": true, 00:31:08.523 "reset": true, 00:31:08.523 "compare": true, 00:31:08.523 "compare_and_write": false, 00:31:08.523 "abort": true, 00:31:08.523 "nvme_admin": false, 00:31:08.523 "nvme_io": false 00:31:08.523 }, 00:31:08.523 "driver_specific": { 00:31:08.523 "gpt": { 00:31:08.523 "base_bdev": "Nvme0n1", 00:31:08.523 "offset_blocks": 256, 00:31:08.523 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:31:08.523 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:08.523 "partition_name": "SPDK_TEST_first" 00:31:08.523 } 00:31:08.523 } 00:31:08.523 } 00:31:08.523 ]' 00:31:08.523 22:39:07 -- bdev/blockdev.sh@620 -- # jq -r length 00:31:08.782 22:39:07 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:31:08.782 22:39:07 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:08.782 22:39:07 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:08.782 22:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.782 22:39:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.782 22:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@624 -- # bdev='[ 00:31:08.782 { 00:31:08.782 "name": "Nvme0n1p2", 00:31:08.782 "aliases": [ 00:31:08.782 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:31:08.782 ], 00:31:08.782 "product_name": "GPT Disk", 00:31:08.782 "block_size": 4096, 00:31:08.782 "num_blocks": 655103, 00:31:08.782 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:08.782 "assigned_rate_limits": { 00:31:08.782 "rw_ios_per_sec": 0, 00:31:08.782 
"rw_mbytes_per_sec": 0, 00:31:08.782 "r_mbytes_per_sec": 0, 00:31:08.782 "w_mbytes_per_sec": 0 00:31:08.782 }, 00:31:08.782 "claimed": false, 00:31:08.782 "zoned": false, 00:31:08.782 "supported_io_types": { 00:31:08.782 "read": true, 00:31:08.782 "write": true, 00:31:08.782 "unmap": true, 00:31:08.782 "write_zeroes": true, 00:31:08.782 "flush": true, 00:31:08.782 "reset": true, 00:31:08.782 "compare": true, 00:31:08.782 "compare_and_write": false, 00:31:08.782 "abort": true, 00:31:08.782 "nvme_admin": false, 00:31:08.782 "nvme_io": false 00:31:08.782 }, 00:31:08.782 "driver_specific": { 00:31:08.782 "gpt": { 00:31:08.782 "base_bdev": "Nvme0n1", 00:31:08.782 "offset_blocks": 655360, 00:31:08.782 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:31:08.782 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:08.782 "partition_name": "SPDK_TEST_second" 00:31:08.782 } 00:31:08.782 } 00:31:08.782 } 00:31:08.782 ]' 00:31:08.782 22:39:07 -- bdev/blockdev.sh@625 -- # jq -r length 00:31:08.782 22:39:07 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:31:08.782 22:39:07 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:08.782 22:39:07 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:09.040 22:39:07 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:09.040 22:39:07 -- bdev/blockdev.sh@629 -- # killprocess 142853 00:31:09.040 22:39:07 -- common/autotest_common.sh@926 -- # '[' -z 142853 ']' 00:31:09.040 22:39:07 -- common/autotest_common.sh@930 -- # kill -0 142853 00:31:09.040 22:39:07 -- common/autotest_common.sh@931 -- # uname 00:31:09.040 22:39:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:09.040 22:39:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142853 00:31:09.040 22:39:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:09.040 killing process with pid 142853 00:31:09.040 22:39:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:09.040 22:39:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142853' 00:31:09.040 22:39:07 -- common/autotest_common.sh@945 -- # kill 142853 00:31:09.040 22:39:07 -- common/autotest_common.sh@950 -- # wait 142853 00:31:10.944 00:31:10.944 real 0m3.978s 00:31:10.944 user 0m4.422s 00:31:10.944 sys 0m0.487s 00:31:10.944 22:39:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.944 ************************************ 00:31:10.944 END TEST bdev_gpt_uuid 00:31:10.944 ************************************ 00:31:10.944 22:39:09 -- common/autotest_common.sh@10 -- # set +x 00:31:10.944 22:39:09 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:31:10.944 22:39:09 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:31:10.944 22:39:09 -- bdev/blockdev.sh@809 -- # cleanup 00:31:10.944 22:39:09 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:10.944 22:39:09 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:10.944 22:39:09 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:31:10.944 22:39:09 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:31:10.944 22:39:09 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:31:10.944 22:39:09 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:10.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:11.203 Waiting for block devices as requested 00:31:11.203 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:11.203 22:39:10 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:31:11.203 22:39:10 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:31:11.203 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:31:11.203 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:31:11.203 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:31:11.203 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:31:11.203 ************************************ 00:31:11.203 END TEST blockdev_nvme_gpt 00:31:11.203 ************************************ 00:31:11.203 22:39:10 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:31:11.203 00:31:11.203 real 0m43.392s 00:31:11.203 user 1m0.997s 00:31:11.203 sys 0m6.114s 00:31:11.203 22:39:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.203 22:39:10 -- common/autotest_common.sh@10 -- # set +x 00:31:11.203 22:39:10 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:11.203 22:39:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:11.204 22:39:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:11.204 22:39:10 -- common/autotest_common.sh@10 -- # set +x 00:31:11.204 ************************************ 00:31:11.204 START TEST nvme 00:31:11.204 ************************************ 00:31:11.204 22:39:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:11.463 * Looking for test storage... 00:31:11.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:11.463 22:39:10 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:11.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:11.981 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:12.921 22:39:11 -- nvme/nvme.sh@79 -- # uname 00:31:12.921 22:39:11 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:31:12.921 22:39:11 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:31:12.921 22:39:11 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:31:12.921 22:39:11 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:31:12.921 22:39:11 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:31:12.921 22:39:11 -- common/autotest_common.sh@1045 -- # echo 0 00:31:12.921 22:39:11 -- common/autotest_common.sh@1047 -- # stubpid=143299 00:31:12.921 Waiting for stub to ready for secondary processes... 00:31:12.921 22:39:11 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:31:12.921 22:39:11 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:31:12.921 22:39:11 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:12.921 22:39:11 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143299 ]] 00:31:12.921 22:39:11 -- common/autotest_common.sh@1052 -- # sleep 1s 00:31:12.921 [2024-07-14 22:39:11.908141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
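The nvme suite below starts by launching stub, an SPDK primary process that does nothing except hold the initialized environment: -s 4096 reserves 4096 MB of memory, -i 0 sets the shared-memory id, and -m 0xE claims cores 1-3. The individual nvme test binaries then attach to that instance (note the same -i 0 on spdk_nvme_identify further down) instead of re-initializing EAL each time, and the "Waiting for stub..." loop simply polls for /var/run/spdk_stub0. Command as run here:

  ./test/app/stub/stub -s 4096 -i 0 -m 0xE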
00:31:12.921 [2024-07-14 22:39:11.908347] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.898 22:39:12 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:13.898 22:39:12 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143299 ]] 00:31:13.898 22:39:12 -- common/autotest_common.sh@1052 -- # sleep 1s 00:31:14.155 [2024-07-14 22:39:13.163995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:14.413 [2024-07-14 22:39:13.332656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.413 [2024-07-14 22:39:13.332652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.413 [2024-07-14 22:39:13.332513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.413 [2024-07-14 22:39:13.348362] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:14.413 [2024-07-14 22:39:13.357748] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:31:14.413 [2024-07-14 22:39:13.358385] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:31:14.978 22:39:13 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:14.978 done. 00:31:14.978 22:39:13 -- common/autotest_common.sh@1054 -- # echo done. 00:31:14.978 22:39:13 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:14.978 22:39:13 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:31:14.978 22:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:14.978 22:39:13 -- common/autotest_common.sh@10 -- # set +x 00:31:14.978 ************************************ 00:31:14.978 START TEST nvme_reset 00:31:14.978 ************************************ 00:31:14.978 22:39:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:15.236 Initializing NVMe Controllers 00:31:15.236 Skipping QEMU NVMe SSD at 0000:00:06.0 00:31:15.236 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:31:15.236 00:31:15.236 real 0m0.285s 00:31:15.236 user 0m0.097s 00:31:15.236 sys 0m0.118s 00:31:15.236 22:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.236 ************************************ 00:31:15.236 22:39:14 -- common/autotest_common.sh@10 -- # set +x 00:31:15.236 END TEST nvme_reset 00:31:15.236 ************************************ 00:31:15.236 22:39:14 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:31:15.236 22:39:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:15.236 22:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:15.236 22:39:14 -- common/autotest_common.sh@10 -- # set +x 00:31:15.236 ************************************ 00:31:15.236 START TEST nvme_identify 00:31:15.236 ************************************ 00:31:15.236 22:39:14 -- common/autotest_common.sh@1104 -- # nvme_identify 00:31:15.236 22:39:14 -- nvme/nvme.sh@12 -- # bdfs=() 00:31:15.236 22:39:14 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:31:15.236 22:39:14 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:31:15.236 22:39:14 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:31:15.236 22:39:14 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:31:15.236 22:39:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:15.236 22:39:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:15.236 22:39:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:15.236 22:39:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:15.236 22:39:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:15.236 22:39:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:15.236 22:39:14 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:31:15.495 [2024-07-14 22:39:14.529272] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 143338 terminated unexpected 00:31:15.495 ===================================================== 00:31:15.495 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:15.495 ===================================================== 00:31:15.495 Controller Capabilities/Features 00:31:15.495 ================================ 00:31:15.495 Vendor ID: 1b36 00:31:15.495 Subsystem Vendor ID: 1af4 00:31:15.495 Serial Number: 12340 00:31:15.495 Model Number: QEMU NVMe Ctrl 00:31:15.495 Firmware Version: 8.0.0 00:31:15.495 Recommended Arb Burst: 6 00:31:15.495 IEEE OUI Identifier: 00 54 52 00:31:15.495 Multi-path I/O 00:31:15.495 May have multiple subsystem ports: No 00:31:15.495 May have multiple controllers: No 00:31:15.495 Associated with SR-IOV VF: No 00:31:15.495 Max Data Transfer Size: 524288 00:31:15.495 Max Number of Namespaces: 256 00:31:15.495 Max Number of I/O Queues: 64 00:31:15.495 NVMe Specification Version (VS): 1.4 00:31:15.495 NVMe Specification Version (Identify): 1.4 00:31:15.495 Maximum Queue Entries: 2048 00:31:15.495 Contiguous Queues Required: Yes 00:31:15.495 Arbitration Mechanisms Supported 00:31:15.495 Weighted Round Robin: Not Supported 00:31:15.495 Vendor Specific: Not Supported 00:31:15.495 Reset Timeout: 7500 ms 00:31:15.495 Doorbell Stride: 4 bytes 00:31:15.495 NVM Subsystem Reset: Not Supported 00:31:15.495 Command Sets Supported 00:31:15.495 NVM Command Set: Supported 00:31:15.495 Boot Partition: Not Supported 00:31:15.495 Memory Page Size Minimum: 4096 bytes 00:31:15.495 Memory Page Size Maximum: 65536 bytes 00:31:15.495 Persistent Memory Region: Not Supported 00:31:15.495 Optional Asynchronous Events Supported 00:31:15.495 Namespace Attribute Notices: Supported 00:31:15.495 Firmware Activation Notices: Not Supported 00:31:15.495 ANA Change Notices: Not Supported 00:31:15.495 PLE Aggregate Log Change Notices: Not Supported 00:31:15.495 LBA Status Info Alert Notices: Not Supported 00:31:15.495 EGE Aggregate Log Change Notices: Not Supported 00:31:15.495 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.495 Zone Descriptor Change Notices: Not Supported 00:31:15.495 Discovery Log Change Notices: Not Supported 00:31:15.495 Controller Attributes 00:31:15.495 128-bit Host Identifier: Not Supported 00:31:15.495 Non-Operational Permissive Mode: Not Supported 00:31:15.495 NVM Sets: Not Supported 00:31:15.495 Read Recovery Levels: Not Supported 00:31:15.495 Endurance Groups: Not Supported 00:31:15.495 Predictable Latency Mode: Not Supported 00:31:15.495 Traffic Based Keep ALive: Not Supported 00:31:15.495 Namespace Granularity: Not Supported 00:31:15.495 SQ Associations: Not Supported 00:31:15.495 UUID List: Not Supported 00:31:15.495 Multi-Domain Subsystem: Not Supported 00:31:15.495 
Fixed Capacity Management: Not Supported 00:31:15.495 Variable Capacity Management: Not Supported 00:31:15.495 Delete Endurance Group: Not Supported 00:31:15.495 Delete NVM Set: Not Supported 00:31:15.495 Extended LBA Formats Supported: Supported 00:31:15.495 Flexible Data Placement Supported: Not Supported 00:31:15.495 00:31:15.495 Controller Memory Buffer Support 00:31:15.495 ================================ 00:31:15.495 Supported: No 00:31:15.495 00:31:15.495 Persistent Memory Region Support 00:31:15.495 ================================ 00:31:15.495 Supported: No 00:31:15.495 00:31:15.495 Admin Command Set Attributes 00:31:15.495 ============================ 00:31:15.495 Security Send/Receive: Not Supported 00:31:15.495 Format NVM: Supported 00:31:15.495 Firmware Activate/Download: Not Supported 00:31:15.495 Namespace Management: Supported 00:31:15.495 Device Self-Test: Not Supported 00:31:15.495 Directives: Supported 00:31:15.495 NVMe-MI: Not Supported 00:31:15.495 Virtualization Management: Not Supported 00:31:15.495 Doorbell Buffer Config: Supported 00:31:15.495 Get LBA Status Capability: Not Supported 00:31:15.495 Command & Feature Lockdown Capability: Not Supported 00:31:15.495 Abort Command Limit: 4 00:31:15.495 Async Event Request Limit: 4 00:31:15.495 Number of Firmware Slots: N/A 00:31:15.495 Firmware Slot 1 Read-Only: N/A 00:31:15.495 Firmware Activation Without Reset: N/A 00:31:15.495 Multiple Update Detection Support: N/A 00:31:15.495 Firmware Update Granularity: No Information Provided 00:31:15.495 Per-Namespace SMART Log: Yes 00:31:15.495 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.495 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:31:15.495 Command Effects Log Page: Supported 00:31:15.495 Get Log Page Extended Data: Supported 00:31:15.495 Telemetry Log Pages: Not Supported 00:31:15.495 Persistent Event Log Pages: Not Supported 00:31:15.495 Supported Log Pages Log Page: May Support 00:31:15.495 Commands Supported & Effects Log Page: Not Supported 00:31:15.495 Feature Identifiers & Effects Log Page:May Support 00:31:15.495 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.495 Data Area 4 for Telemetry Log: Not Supported 00:31:15.495 Error Log Page Entries Supported: 1 00:31:15.495 Keep Alive: Not Supported 00:31:15.495 00:31:15.495 NVM Command Set Attributes 00:31:15.495 ========================== 00:31:15.495 Submission Queue Entry Size 00:31:15.495 Max: 64 00:31:15.495 Min: 64 00:31:15.495 Completion Queue Entry Size 00:31:15.495 Max: 16 00:31:15.495 Min: 16 00:31:15.495 Number of Namespaces: 256 00:31:15.495 Compare Command: Supported 00:31:15.495 Write Uncorrectable Command: Not Supported 00:31:15.495 Dataset Management Command: Supported 00:31:15.495 Write Zeroes Command: Supported 00:31:15.495 Set Features Save Field: Supported 00:31:15.495 Reservations: Not Supported 00:31:15.495 Timestamp: Supported 00:31:15.495 Copy: Supported 00:31:15.495 Volatile Write Cache: Present 00:31:15.496 Atomic Write Unit (Normal): 1 00:31:15.496 Atomic Write Unit (PFail): 1 00:31:15.496 Atomic Compare & Write Unit: 1 00:31:15.496 Fused Compare & Write: Not Supported 00:31:15.496 Scatter-Gather List 00:31:15.496 SGL Command Set: Supported 00:31:15.496 SGL Keyed: Not Supported 00:31:15.496 SGL Bit Bucket Descriptor: Not Supported 00:31:15.496 SGL Metadata Pointer: Not Supported 00:31:15.496 Oversized SGL: Not Supported 00:31:15.496 SGL Metadata Address: Not Supported 00:31:15.496 SGL Offset: Not Supported 00:31:15.496 Transport SGL Data Block: Not Supported 
00:31:15.496 Replay Protected Memory Block: Not Supported 00:31:15.496 00:31:15.496 Firmware Slot Information 00:31:15.496 ========================= 00:31:15.496 Active slot: 1 00:31:15.496 Slot 1 Firmware Revision: 1.0 00:31:15.496 00:31:15.496 00:31:15.496 Commands Supported and Effects 00:31:15.496 ============================== 00:31:15.496 Admin Commands 00:31:15.496 -------------- 00:31:15.496 Delete I/O Submission Queue (00h): Supported 00:31:15.496 Create I/O Submission Queue (01h): Supported 00:31:15.496 Get Log Page (02h): Supported 00:31:15.496 Delete I/O Completion Queue (04h): Supported 00:31:15.496 Create I/O Completion Queue (05h): Supported 00:31:15.496 Identify (06h): Supported 00:31:15.496 Abort (08h): Supported 00:31:15.496 Set Features (09h): Supported 00:31:15.496 Get Features (0Ah): Supported 00:31:15.496 Asynchronous Event Request (0Ch): Supported 00:31:15.496 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.496 Directive Send (19h): Supported 00:31:15.496 Directive Receive (1Ah): Supported 00:31:15.496 Virtualization Management (1Ch): Supported 00:31:15.496 Doorbell Buffer Config (7Ch): Supported 00:31:15.496 Format NVM (80h): Supported LBA-Change 00:31:15.496 I/O Commands 00:31:15.496 ------------ 00:31:15.496 Flush (00h): Supported LBA-Change 00:31:15.496 Write (01h): Supported LBA-Change 00:31:15.496 Read (02h): Supported 00:31:15.496 Compare (05h): Supported 00:31:15.496 Write Zeroes (08h): Supported LBA-Change 00:31:15.496 Dataset Management (09h): Supported LBA-Change 00:31:15.496 Unknown (0Ch): Supported 00:31:15.496 Unknown (12h): Supported 00:31:15.496 Copy (19h): Supported LBA-Change 00:31:15.496 Unknown (1Dh): Supported LBA-Change 00:31:15.496 00:31:15.496 Error Log 00:31:15.496 ========= 00:31:15.496 00:31:15.496 Arbitration 00:31:15.496 =========== 00:31:15.496 Arbitration Burst: no limit 00:31:15.496 00:31:15.496 Power Management 00:31:15.496 ================ 00:31:15.496 Number of Power States: 1 00:31:15.496 Current Power State: Power State #0 00:31:15.496 Power State #0: 00:31:15.496 Max Power: 25.00 W 00:31:15.496 Non-Operational State: Operational 00:31:15.496 Entry Latency: 16 microseconds 00:31:15.496 Exit Latency: 4 microseconds 00:31:15.496 Relative Read Throughput: 0 00:31:15.496 Relative Read Latency: 0 00:31:15.496 Relative Write Throughput: 0 00:31:15.496 Relative Write Latency: 0 00:31:15.753 Idle Power: Not Reported 00:31:15.753 Active Power: Not Reported 00:31:15.753 Non-Operational Permissive Mode: Not Supported 00:31:15.753 00:31:15.753 Health Information 00:31:15.753 ================== 00:31:15.753 Critical Warnings: 00:31:15.753 Available Spare Space: OK 00:31:15.753 Temperature: OK 00:31:15.753 Device Reliability: OK 00:31:15.753 Read Only: No 00:31:15.753 Volatile Memory Backup: OK 00:31:15.753 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.753 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.753 Available Spare: 0% 00:31:15.753 Available Spare Threshold: 0% 00:31:15.753 Life Percentage Used: 0% 00:31:15.754 Data Units Read: 8059 00:31:15.754 Data Units Written: 3914 00:31:15.754 Host Read Commands: 302070 00:31:15.754 Host Write Commands: 166060 00:31:15.754 Controller Busy Time: 0 minutes 00:31:15.754 Power Cycles: 0 00:31:15.754 Power On Hours: 0 hours 00:31:15.754 Unsafe Shutdowns: 0 00:31:15.754 Unrecoverable Media Errors: 0 00:31:15.754 Lifetime Error Log Entries: 0 00:31:15.754 Warning Temperature Time: 0 minutes 00:31:15.754 Critical Temperature Time: 0 minutes 00:31:15.754 00:31:15.754 
Number of Queues 00:31:15.754 ================ 00:31:15.754 Number of I/O Submission Queues: 64 00:31:15.754 Number of I/O Completion Queues: 64 00:31:15.754 00:31:15.754 ZNS Specific Controller Data 00:31:15.754 ============================ 00:31:15.754 Zone Append Size Limit: 0 00:31:15.754 00:31:15.754 00:31:15.754 Active Namespaces 00:31:15.754 ================= 00:31:15.754 Namespace ID:1 00:31:15.754 Error Recovery Timeout: Unlimited 00:31:15.754 Command Set Identifier: NVM (00h) 00:31:15.754 Deallocate: Supported 00:31:15.754 Deallocated/Unwritten Error: Supported 00:31:15.754 Deallocated Read Value: All 0x00 00:31:15.754 Deallocate in Write Zeroes: Not Supported 00:31:15.754 Deallocated Guard Field: 0xFFFF 00:31:15.754 Flush: Supported 00:31:15.754 Reservation: Not Supported 00:31:15.754 Namespace Sharing Capabilities: Private 00:31:15.754 Size (in LBAs): 1310720 (5GiB) 00:31:15.754 Capacity (in LBAs): 1310720 (5GiB) 00:31:15.754 Utilization (in LBAs): 1310720 (5GiB) 00:31:15.754 Thin Provisioning: Not Supported 00:31:15.754 Per-NS Atomic Units: No 00:31:15.754 Maximum Single Source Range Length: 128 00:31:15.754 Maximum Copy Length: 128 00:31:15.754 Maximum Source Range Count: 128 00:31:15.754 NGUID/EUI64 Never Reused: No 00:31:15.754 Namespace Write Protected: No 00:31:15.754 Number of LBA Formats: 8 00:31:15.754 Current LBA Format: LBA Format #04 00:31:15.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.754 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.754 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.754 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.754 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.754 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.754 00:31:15.754 22:39:14 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:31:15.754 22:39:14 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:31:16.012 ===================================================== 00:31:16.012 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:16.012 ===================================================== 00:31:16.012 Controller Capabilities/Features 00:31:16.012 ================================ 00:31:16.012 Vendor ID: 1b36 00:31:16.012 Subsystem Vendor ID: 1af4 00:31:16.012 Serial Number: 12340 00:31:16.012 Model Number: QEMU NVMe Ctrl 00:31:16.012 Firmware Version: 8.0.0 00:31:16.012 Recommended Arb Burst: 6 00:31:16.012 IEEE OUI Identifier: 00 54 52 00:31:16.012 Multi-path I/O 00:31:16.012 May have multiple subsystem ports: No 00:31:16.012 May have multiple controllers: No 00:31:16.012 Associated with SR-IOV VF: No 00:31:16.012 Max Data Transfer Size: 524288 00:31:16.012 Max Number of Namespaces: 256 00:31:16.012 Max Number of I/O Queues: 64 00:31:16.012 NVMe Specification Version (VS): 1.4 00:31:16.012 NVMe Specification Version (Identify): 1.4 00:31:16.012 Maximum Queue Entries: 2048 00:31:16.012 Contiguous Queues Required: Yes 00:31:16.012 Arbitration Mechanisms Supported 00:31:16.012 Weighted Round Robin: Not Supported 00:31:16.012 Vendor Specific: Not Supported 00:31:16.012 Reset Timeout: 7500 ms 00:31:16.012 Doorbell Stride: 4 bytes 00:31:16.012 NVM Subsystem Reset: Not Supported 00:31:16.012 Command Sets Supported 00:31:16.012 NVM Command Set: Supported 00:31:16.012 Boot Partition: Not Supported 00:31:16.012 Memory Page Size 
Minimum: 4096 bytes 00:31:16.012 Memory Page Size Maximum: 65536 bytes 00:31:16.012 Persistent Memory Region: Not Supported 00:31:16.012 Optional Asynchronous Events Supported 00:31:16.012 Namespace Attribute Notices: Supported 00:31:16.012 Firmware Activation Notices: Not Supported 00:31:16.012 ANA Change Notices: Not Supported 00:31:16.012 PLE Aggregate Log Change Notices: Not Supported 00:31:16.012 LBA Status Info Alert Notices: Not Supported 00:31:16.012 EGE Aggregate Log Change Notices: Not Supported 00:31:16.012 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.012 Zone Descriptor Change Notices: Not Supported 00:31:16.012 Discovery Log Change Notices: Not Supported 00:31:16.012 Controller Attributes 00:31:16.012 128-bit Host Identifier: Not Supported 00:31:16.012 Non-Operational Permissive Mode: Not Supported 00:31:16.012 NVM Sets: Not Supported 00:31:16.012 Read Recovery Levels: Not Supported 00:31:16.012 Endurance Groups: Not Supported 00:31:16.012 Predictable Latency Mode: Not Supported 00:31:16.012 Traffic Based Keep ALive: Not Supported 00:31:16.012 Namespace Granularity: Not Supported 00:31:16.012 SQ Associations: Not Supported 00:31:16.012 UUID List: Not Supported 00:31:16.012 Multi-Domain Subsystem: Not Supported 00:31:16.012 Fixed Capacity Management: Not Supported 00:31:16.012 Variable Capacity Management: Not Supported 00:31:16.012 Delete Endurance Group: Not Supported 00:31:16.012 Delete NVM Set: Not Supported 00:31:16.013 Extended LBA Formats Supported: Supported 00:31:16.013 Flexible Data Placement Supported: Not Supported 00:31:16.013 00:31:16.013 Controller Memory Buffer Support 00:31:16.013 ================================ 00:31:16.013 Supported: No 00:31:16.013 00:31:16.013 Persistent Memory Region Support 00:31:16.013 ================================ 00:31:16.013 Supported: No 00:31:16.013 00:31:16.013 Admin Command Set Attributes 00:31:16.013 ============================ 00:31:16.013 Security Send/Receive: Not Supported 00:31:16.013 Format NVM: Supported 00:31:16.013 Firmware Activate/Download: Not Supported 00:31:16.013 Namespace Management: Supported 00:31:16.013 Device Self-Test: Not Supported 00:31:16.013 Directives: Supported 00:31:16.013 NVMe-MI: Not Supported 00:31:16.013 Virtualization Management: Not Supported 00:31:16.013 Doorbell Buffer Config: Supported 00:31:16.013 Get LBA Status Capability: Not Supported 00:31:16.013 Command & Feature Lockdown Capability: Not Supported 00:31:16.013 Abort Command Limit: 4 00:31:16.013 Async Event Request Limit: 4 00:31:16.013 Number of Firmware Slots: N/A 00:31:16.013 Firmware Slot 1 Read-Only: N/A 00:31:16.013 Firmware Activation Without Reset: N/A 00:31:16.013 Multiple Update Detection Support: N/A 00:31:16.013 Firmware Update Granularity: No Information Provided 00:31:16.013 Per-Namespace SMART Log: Yes 00:31:16.013 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.013 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:31:16.013 Command Effects Log Page: Supported 00:31:16.013 Get Log Page Extended Data: Supported 00:31:16.013 Telemetry Log Pages: Not Supported 00:31:16.013 Persistent Event Log Pages: Not Supported 00:31:16.013 Supported Log Pages Log Page: May Support 00:31:16.013 Commands Supported & Effects Log Page: Not Supported 00:31:16.013 Feature Identifiers & Effects Log Page:May Support 00:31:16.013 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.013 Data Area 4 for Telemetry Log: Not Supported 00:31:16.013 Error Log Page Entries Supported: 1 00:31:16.013 Keep Alive: Not 
Supported 00:31:16.013 00:31:16.013 NVM Command Set Attributes 00:31:16.013 ========================== 00:31:16.013 Submission Queue Entry Size 00:31:16.013 Max: 64 00:31:16.013 Min: 64 00:31:16.013 Completion Queue Entry Size 00:31:16.013 Max: 16 00:31:16.013 Min: 16 00:31:16.013 Number of Namespaces: 256 00:31:16.013 Compare Command: Supported 00:31:16.013 Write Uncorrectable Command: Not Supported 00:31:16.013 Dataset Management Command: Supported 00:31:16.013 Write Zeroes Command: Supported 00:31:16.013 Set Features Save Field: Supported 00:31:16.013 Reservations: Not Supported 00:31:16.013 Timestamp: Supported 00:31:16.013 Copy: Supported 00:31:16.013 Volatile Write Cache: Present 00:31:16.013 Atomic Write Unit (Normal): 1 00:31:16.013 Atomic Write Unit (PFail): 1 00:31:16.013 Atomic Compare & Write Unit: 1 00:31:16.013 Fused Compare & Write: Not Supported 00:31:16.013 Scatter-Gather List 00:31:16.013 SGL Command Set: Supported 00:31:16.013 SGL Keyed: Not Supported 00:31:16.013 SGL Bit Bucket Descriptor: Not Supported 00:31:16.013 SGL Metadata Pointer: Not Supported 00:31:16.013 Oversized SGL: Not Supported 00:31:16.013 SGL Metadata Address: Not Supported 00:31:16.013 SGL Offset: Not Supported 00:31:16.013 Transport SGL Data Block: Not Supported 00:31:16.013 Replay Protected Memory Block: Not Supported 00:31:16.013 00:31:16.013 Firmware Slot Information 00:31:16.013 ========================= 00:31:16.013 Active slot: 1 00:31:16.013 Slot 1 Firmware Revision: 1.0 00:31:16.013 00:31:16.013 00:31:16.013 Commands Supported and Effects 00:31:16.013 ============================== 00:31:16.013 Admin Commands 00:31:16.013 -------------- 00:31:16.013 Delete I/O Submission Queue (00h): Supported 00:31:16.013 Create I/O Submission Queue (01h): Supported 00:31:16.013 Get Log Page (02h): Supported 00:31:16.013 Delete I/O Completion Queue (04h): Supported 00:31:16.013 Create I/O Completion Queue (05h): Supported 00:31:16.013 Identify (06h): Supported 00:31:16.013 Abort (08h): Supported 00:31:16.013 Set Features (09h): Supported 00:31:16.013 Get Features (0Ah): Supported 00:31:16.013 Asynchronous Event Request (0Ch): Supported 00:31:16.013 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:16.013 Directive Send (19h): Supported 00:31:16.013 Directive Receive (1Ah): Supported 00:31:16.013 Virtualization Management (1Ch): Supported 00:31:16.013 Doorbell Buffer Config (7Ch): Supported 00:31:16.013 Format NVM (80h): Supported LBA-Change 00:31:16.013 I/O Commands 00:31:16.013 ------------ 00:31:16.013 Flush (00h): Supported LBA-Change 00:31:16.013 Write (01h): Supported LBA-Change 00:31:16.013 Read (02h): Supported 00:31:16.013 Compare (05h): Supported 00:31:16.013 Write Zeroes (08h): Supported LBA-Change 00:31:16.013 Dataset Management (09h): Supported LBA-Change 00:31:16.013 Unknown (0Ch): Supported 00:31:16.013 Unknown (12h): Supported 00:31:16.013 Copy (19h): Supported LBA-Change 00:31:16.013 Unknown (1Dh): Supported LBA-Change 00:31:16.013 00:31:16.013 Error Log 00:31:16.013 ========= 00:31:16.013 00:31:16.013 Arbitration 00:31:16.013 =========== 00:31:16.013 Arbitration Burst: no limit 00:31:16.013 00:31:16.013 Power Management 00:31:16.013 ================ 00:31:16.013 Number of Power States: 1 00:31:16.013 Current Power State: Power State #0 00:31:16.013 Power State #0: 00:31:16.013 Max Power: 25.00 W 00:31:16.013 Non-Operational State: Operational 00:31:16.013 Entry Latency: 16 microseconds 00:31:16.013 Exit Latency: 4 microseconds 00:31:16.013 Relative Read Throughput: 0 
00:31:16.013 Relative Read Latency: 0 00:31:16.013 Relative Write Throughput: 0 00:31:16.013 Relative Write Latency: 0 00:31:16.013 Idle Power: Not Reported 00:31:16.013 Active Power: Not Reported 00:31:16.013 Non-Operational Permissive Mode: Not Supported 00:31:16.013 00:31:16.013 Health Information 00:31:16.013 ================== 00:31:16.013 Critical Warnings: 00:31:16.013 Available Spare Space: OK 00:31:16.013 Temperature: OK 00:31:16.013 Device Reliability: OK 00:31:16.013 Read Only: No 00:31:16.013 Volatile Memory Backup: OK 00:31:16.013 Current Temperature: 323 Kelvin (50 Celsius) 00:31:16.013 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:16.013 Available Spare: 0% 00:31:16.013 Available Spare Threshold: 0% 00:31:16.013 Life Percentage Used: 0% 00:31:16.013 Data Units Read: 8059 00:31:16.013 Data Units Written: 3914 00:31:16.013 Host Read Commands: 302070 00:31:16.013 Host Write Commands: 166060 00:31:16.013 Controller Busy Time: 0 minutes 00:31:16.013 Power Cycles: 0 00:31:16.013 Power On Hours: 0 hours 00:31:16.013 Unsafe Shutdowns: 0 00:31:16.013 Unrecoverable Media Errors: 0 00:31:16.014 Lifetime Error Log Entries: 0 00:31:16.014 Warning Temperature Time: 0 minutes 00:31:16.014 Critical Temperature Time: 0 minutes 00:31:16.014 00:31:16.014 Number of Queues 00:31:16.014 ================ 00:31:16.014 Number of I/O Submission Queues: 64 00:31:16.014 Number of I/O Completion Queues: 64 00:31:16.014 00:31:16.014 ZNS Specific Controller Data 00:31:16.014 ============================ 00:31:16.014 Zone Append Size Limit: 0 00:31:16.014 00:31:16.014 00:31:16.014 Active Namespaces 00:31:16.014 ================= 00:31:16.014 Namespace ID:1 00:31:16.014 Error Recovery Timeout: Unlimited 00:31:16.014 Command Set Identifier: NVM (00h) 00:31:16.014 Deallocate: Supported 00:31:16.014 Deallocated/Unwritten Error: Supported 00:31:16.014 Deallocated Read Value: All 0x00 00:31:16.014 Deallocate in Write Zeroes: Not Supported 00:31:16.014 Deallocated Guard Field: 0xFFFF 00:31:16.014 Flush: Supported 00:31:16.014 Reservation: Not Supported 00:31:16.014 Namespace Sharing Capabilities: Private 00:31:16.014 Size (in LBAs): 1310720 (5GiB) 00:31:16.014 Capacity (in LBAs): 1310720 (5GiB) 00:31:16.014 Utilization (in LBAs): 1310720 (5GiB) 00:31:16.014 Thin Provisioning: Not Supported 00:31:16.014 Per-NS Atomic Units: No 00:31:16.014 Maximum Single Source Range Length: 128 00:31:16.014 Maximum Copy Length: 128 00:31:16.014 Maximum Source Range Count: 128 00:31:16.014 NGUID/EUI64 Never Reused: No 00:31:16.014 Namespace Write Protected: No 00:31:16.014 Number of LBA Formats: 8 00:31:16.014 Current LBA Format: LBA Format #04 00:31:16.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.014 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.014 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.014 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.014 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.014 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.014 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.014 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.014 00:31:16.014 00:31:16.014 real 0m0.690s 00:31:16.014 user 0m0.333s 00:31:16.014 sys 0m0.258s 00:31:16.014 22:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.014 22:39:14 -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 ************************************ 00:31:16.014 END TEST nvme_identify 00:31:16.014 ************************************ 00:31:16.014 
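For anyone replaying the nvme_identify step above by hand: before invoking spdk_nvme_identify, the harness discovers controller BDFs by piping scripts/gen_nvme.sh through jq, as traced at 22:39:14 above. A minimal stand-alone sketch of that discover-then-identify loop follows, assuming the repo checkout path used throughout this run; the error-message string is illustrative only, not harness output.

#!/usr/bin/env bash
# Sketch: enumerate NVMe BDFs the way this harness does, then run
# per-device identify with the same PCIe transport string seen above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
for bdf in "${bdfs[@]}"; do
  "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

On this run the loop body would execute once, for 0000:00:06.0, matching the single controller whose identify data is printed twice above (once via plain -i 0, once with the explicit trtype:PCIe transport string).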
22:39:14 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:31:16.014 22:39:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.014 22:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.014 22:39:14 -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 ************************************ 00:31:16.014 START TEST nvme_perf 00:31:16.014 ************************************ 00:31:16.014 22:39:14 -- common/autotest_common.sh@1104 -- # nvme_perf 00:31:16.014 22:39:14 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:31:17.390 Initializing NVMe Controllers 00:31:17.390 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:17.390 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:17.390 Initialization complete. Launching workers. 00:31:17.390 ======================================================== 00:31:17.390 Latency(us) 00:31:17.390 Device Information : IOPS MiB/s Average min max 00:31:17.390 PCIE (0000:00:06.0) NSID 1 from core 0: 56831.94 666.00 2252.54 1151.47 7068.94 00:31:17.390 ======================================================== 00:31:17.390 Total : 56831.94 666.00 2252.54 1151.47 7068.94 00:31:17.390 00:31:17.390 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:31:17.390 ================================================================================= 00:31:17.390 1.00000% : 1377.745us 00:31:17.390 10.00000% : 1571.375us 00:31:17.390 25.00000% : 1817.135us 00:31:17.390 50.00000% : 2234.182us 00:31:17.390 75.00000% : 2636.335us 00:31:17.390 90.00000% : 2904.436us 00:31:17.390 95.00000% : 3157.644us 00:31:17.390 98.00000% : 3410.851us 00:31:17.390 99.00000% : 3544.902us 00:31:17.390 99.50000% : 3798.109us 00:31:17.390 99.90000% : 5242.880us 00:31:17.390 99.99000% : 6881.280us 00:31:17.390 99.99900% : 7089.804us 00:31:17.390 99.99990% : 7089.804us 00:31:17.390 99.99999% : 7089.804us 00:31:17.390 00:31:17.390 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:31:17.390 ============================================================================== 00:31:17.390 Range in us Cumulative IO count 00:31:17.390 1146.880 - 1154.327: 0.0018% ( 1) 00:31:17.390 1161.775 - 1169.222: 0.0035% ( 1) 00:31:17.390 1176.669 - 1184.116: 0.0053% ( 1) 00:31:17.390 1184.116 - 1191.564: 0.0088% ( 2) 00:31:17.390 1199.011 - 1206.458: 0.0123% ( 2) 00:31:17.390 1213.905 - 1221.353: 0.0141% ( 1) 00:31:17.390 1221.353 - 1228.800: 0.0158% ( 1) 00:31:17.390 1236.247 - 1243.695: 0.0194% ( 2) 00:31:17.390 1251.142 - 1258.589: 0.0211% ( 1) 00:31:17.390 1258.589 - 1266.036: 0.0282% ( 4) 00:31:17.390 1266.036 - 1273.484: 0.0370% ( 5) 00:31:17.390 1273.484 - 1280.931: 0.0440% ( 4) 00:31:17.390 1280.931 - 1288.378: 0.0581% ( 8) 00:31:17.390 1288.378 - 1295.825: 0.0774% ( 11) 00:31:17.390 1295.825 - 1303.273: 0.1126% ( 20) 00:31:17.390 1303.273 - 1310.720: 0.1478% ( 20) 00:31:17.390 1310.720 - 1318.167: 0.1777% ( 17) 00:31:17.390 1318.167 - 1325.615: 0.2217% ( 25) 00:31:17.390 1325.615 - 1333.062: 0.2938% ( 41) 00:31:17.390 1333.062 - 1340.509: 0.3765% ( 47) 00:31:17.390 1340.509 - 1347.956: 0.4522% ( 43) 00:31:17.390 1347.956 - 1355.404: 0.5666% ( 65) 00:31:17.390 1355.404 - 1362.851: 0.7091% ( 81) 00:31:17.390 1362.851 - 1370.298: 0.8569% ( 84) 00:31:17.390 1370.298 - 1377.745: 1.0153% ( 90) 00:31:17.390 1377.745 - 1385.193: 1.2123% ( 112) 00:31:17.390 1385.193 - 1392.640: 1.4182% ( 117) 00:31:17.390 1392.640 - 1400.087: 1.6364% ( 124) 00:31:17.390 1400.087 - 
1407.535: 1.8739% ( 135) 00:31:17.390 1407.535 - 1414.982: 2.1326% ( 147) 00:31:17.390 1414.982 - 1422.429: 2.4036% ( 154) 00:31:17.390 1422.429 - 1429.876: 2.7080% ( 173) 00:31:17.390 1429.876 - 1437.324: 3.0036% ( 168) 00:31:17.390 1437.324 - 1444.771: 3.3186% ( 179) 00:31:17.390 1444.771 - 1452.218: 3.6494% ( 188) 00:31:17.390 1452.218 - 1459.665: 4.0189% ( 210) 00:31:17.390 1459.665 - 1467.113: 4.3497% ( 188) 00:31:17.390 1467.113 - 1474.560: 4.7438% ( 224) 00:31:17.390 1474.560 - 1482.007: 5.0922% ( 198) 00:31:17.390 1482.007 - 1489.455: 5.4811% ( 221) 00:31:17.390 1489.455 - 1496.902: 5.8682% ( 220) 00:31:17.390 1496.902 - 1504.349: 6.2975% ( 244) 00:31:17.390 1504.349 - 1511.796: 6.6741% ( 214) 00:31:17.390 1511.796 - 1519.244: 7.1069% ( 246) 00:31:17.390 1519.244 - 1526.691: 7.5046% ( 226) 00:31:17.390 1526.691 - 1534.138: 7.9603% ( 259) 00:31:17.390 1534.138 - 1541.585: 8.3668% ( 231) 00:31:17.390 1541.585 - 1549.033: 8.8014% ( 247) 00:31:17.390 1549.033 - 1556.480: 9.2624% ( 262) 00:31:17.390 1556.480 - 1563.927: 9.6864% ( 241) 00:31:17.390 1563.927 - 1571.375: 10.1246% ( 249) 00:31:17.390 1571.375 - 1578.822: 10.5768% ( 257) 00:31:17.390 1578.822 - 1586.269: 11.0272% ( 256) 00:31:17.390 1586.269 - 1593.716: 11.4425% ( 236) 00:31:17.390 1593.716 - 1601.164: 11.8789% ( 248) 00:31:17.390 1601.164 - 1608.611: 12.3223% ( 252) 00:31:17.390 1608.611 - 1616.058: 12.7851% ( 263) 00:31:17.390 1616.058 - 1623.505: 13.2056% ( 239) 00:31:17.390 1623.505 - 1630.953: 13.7018% ( 282) 00:31:17.390 1630.953 - 1638.400: 14.1382% ( 248) 00:31:17.390 1638.400 - 1645.847: 14.5781% ( 250) 00:31:17.390 1645.847 - 1653.295: 15.0320% ( 258) 00:31:17.390 1653.295 - 1660.742: 15.5071% ( 270) 00:31:17.390 1660.742 - 1668.189: 15.9540% ( 254) 00:31:17.390 1668.189 - 1675.636: 16.4203% ( 265) 00:31:17.390 1675.636 - 1683.084: 16.8602% ( 250) 00:31:17.390 1683.084 - 1690.531: 17.3107% ( 256) 00:31:17.390 1690.531 - 1697.978: 17.7576% ( 254) 00:31:17.390 1697.978 - 1705.425: 18.2274% ( 267) 00:31:17.390 1705.425 - 1712.873: 18.6708% ( 252) 00:31:17.390 1712.873 - 1720.320: 19.1213% ( 256) 00:31:17.390 1720.320 - 1727.767: 19.5788% ( 260) 00:31:17.390 1727.767 - 1735.215: 20.0274% ( 255) 00:31:17.390 1735.215 - 1742.662: 20.4885% ( 262) 00:31:17.390 1742.662 - 1750.109: 20.9407% ( 257) 00:31:17.390 1750.109 - 1757.556: 21.3929% ( 257) 00:31:17.390 1757.556 - 1765.004: 21.8521% ( 261) 00:31:17.390 1765.004 - 1772.451: 22.3008% ( 255) 00:31:17.390 1772.451 - 1779.898: 22.7565% ( 259) 00:31:17.390 1779.898 - 1787.345: 23.2211% ( 264) 00:31:17.390 1787.345 - 1794.793: 23.6662% ( 253) 00:31:17.390 1794.793 - 1802.240: 24.1396% ( 269) 00:31:17.390 1802.240 - 1809.687: 24.5777% ( 249) 00:31:17.390 1809.687 - 1817.135: 25.0246% ( 254) 00:31:17.390 1817.135 - 1824.582: 25.5068% ( 274) 00:31:17.390 1824.582 - 1832.029: 25.9537% ( 254) 00:31:17.390 1832.029 - 1839.476: 26.4024% ( 255) 00:31:17.390 1839.476 - 1846.924: 26.8616% ( 261) 00:31:17.390 1846.924 - 1854.371: 27.3015% ( 250) 00:31:17.390 1854.371 - 1861.818: 27.7836% ( 274) 00:31:17.390 1861.818 - 1869.265: 28.2165% ( 246) 00:31:17.390 1869.265 - 1876.713: 28.6863% ( 267) 00:31:17.390 1876.713 - 1884.160: 29.1438% ( 260) 00:31:17.390 1884.160 - 1891.607: 29.5907% ( 254) 00:31:17.390 1891.607 - 1899.055: 30.0640% ( 269) 00:31:17.390 1899.055 - 1906.502: 30.5444% ( 273) 00:31:17.390 1906.502 - 1921.396: 31.4295% ( 503) 00:31:17.390 1921.396 - 1936.291: 32.3251% ( 509) 00:31:17.390 1936.291 - 1951.185: 33.2524% ( 527) 00:31:17.390 1951.185 - 1966.080: 34.1551% ( 513) 
00:31:17.390 1966.080 - 1980.975: 35.0155% ( 489) 00:31:17.390 1980.975 - 1995.869: 35.9217% ( 515) 00:31:17.390 1995.869 - 2010.764: 36.7698% ( 482) 00:31:17.390 2010.764 - 2025.658: 37.6900% ( 523) 00:31:17.390 2025.658 - 2040.553: 38.5839% ( 508) 00:31:17.390 2040.553 - 2055.447: 39.4637% ( 500) 00:31:17.390 2055.447 - 2070.342: 40.3910% ( 527) 00:31:17.390 2070.342 - 2085.236: 41.2778% ( 504) 00:31:17.390 2085.236 - 2100.131: 42.1928% ( 520) 00:31:17.390 2100.131 - 2115.025: 43.0831% ( 506) 00:31:17.390 2115.025 - 2129.920: 43.9928% ( 517) 00:31:17.390 2129.920 - 2144.815: 44.9096% ( 521) 00:31:17.390 2144.815 - 2159.709: 45.8263% ( 521) 00:31:17.390 2159.709 - 2174.604: 46.7290% ( 513) 00:31:17.390 2174.604 - 2189.498: 47.6105% ( 501) 00:31:17.390 2189.498 - 2204.393: 48.5220% ( 518) 00:31:17.391 2204.393 - 2219.287: 49.4422% ( 523) 00:31:17.391 2219.287 - 2234.182: 50.3678% ( 526) 00:31:17.391 2234.182 - 2249.076: 51.2599% ( 507) 00:31:17.391 2249.076 - 2263.971: 52.1537% ( 508) 00:31:17.391 2263.971 - 2278.865: 53.0845% ( 529) 00:31:17.391 2278.865 - 2293.760: 53.9714% ( 504) 00:31:17.391 2293.760 - 2308.655: 54.8705% ( 511) 00:31:17.391 2308.655 - 2323.549: 55.7872% ( 521) 00:31:17.391 2323.549 - 2338.444: 56.6811% ( 508) 00:31:17.391 2338.444 - 2353.338: 57.5996% ( 522) 00:31:17.391 2353.338 - 2368.233: 58.5128% ( 519) 00:31:17.391 2368.233 - 2383.127: 59.4120% ( 511) 00:31:17.391 2383.127 - 2398.022: 60.3129% ( 512) 00:31:17.391 2398.022 - 2412.916: 61.2542% ( 535) 00:31:17.391 2412.916 - 2427.811: 62.1762% ( 524) 00:31:17.391 2427.811 - 2442.705: 63.0648% ( 505) 00:31:17.391 2442.705 - 2457.600: 63.9956% ( 529) 00:31:17.391 2457.600 - 2472.495: 64.9124% ( 521) 00:31:17.391 2472.495 - 2487.389: 65.8326% ( 523) 00:31:17.391 2487.389 - 2502.284: 66.7230% ( 506) 00:31:17.391 2502.284 - 2517.178: 67.6538% ( 529) 00:31:17.391 2517.178 - 2532.073: 68.5793% ( 526) 00:31:17.391 2532.073 - 2546.967: 69.5066% ( 527) 00:31:17.391 2546.967 - 2561.862: 70.4198% ( 519) 00:31:17.391 2561.862 - 2576.756: 71.3260% ( 515) 00:31:17.391 2576.756 - 2591.651: 72.2375% ( 518) 00:31:17.391 2591.651 - 2606.545: 73.1665% ( 528) 00:31:17.391 2606.545 - 2621.440: 74.0921% ( 526) 00:31:17.391 2621.440 - 2636.335: 75.0106% ( 522) 00:31:17.391 2636.335 - 2651.229: 75.9097% ( 511) 00:31:17.391 2651.229 - 2666.124: 76.8300% ( 523) 00:31:17.391 2666.124 - 2681.018: 77.7326% ( 513) 00:31:17.391 2681.018 - 2695.913: 78.6634% ( 529) 00:31:17.391 2695.913 - 2710.807: 79.5854% ( 524) 00:31:17.391 2710.807 - 2725.702: 80.4934% ( 516) 00:31:17.391 2725.702 - 2740.596: 81.4154% ( 524) 00:31:17.391 2740.596 - 2755.491: 82.3216% ( 515) 00:31:17.391 2755.491 - 2770.385: 83.2330% ( 518) 00:31:17.391 2770.385 - 2785.280: 84.1322% ( 511) 00:31:17.391 2785.280 - 2800.175: 85.0366% ( 514) 00:31:17.391 2800.175 - 2815.069: 85.9058% ( 494) 00:31:17.391 2815.069 - 2829.964: 86.7451% ( 477) 00:31:17.391 2829.964 - 2844.858: 87.5475% ( 456) 00:31:17.391 2844.858 - 2859.753: 88.2830% ( 418) 00:31:17.391 2859.753 - 2874.647: 88.9992% ( 407) 00:31:17.391 2874.647 - 2889.542: 89.6590% ( 375) 00:31:17.391 2889.542 - 2904.436: 90.2397% ( 330) 00:31:17.391 2904.436 - 2919.331: 90.7746% ( 304) 00:31:17.391 2919.331 - 2934.225: 91.2514% ( 271) 00:31:17.391 2934.225 - 2949.120: 91.6737% ( 240) 00:31:17.391 2949.120 - 2964.015: 92.0450% ( 211) 00:31:17.391 2964.015 - 2978.909: 92.3916% ( 197) 00:31:17.391 2978.909 - 2993.804: 92.7083% ( 180) 00:31:17.391 2993.804 - 3008.698: 92.9899% ( 160) 00:31:17.391 3008.698 - 3023.593: 93.2626% ( 155) 
00:31:17.391 3023.593 - 3038.487: 93.5248% ( 149) 00:31:17.391 3038.487 - 3053.382: 93.7518% ( 129) 00:31:17.391 3053.382 - 3068.276: 93.9629% ( 120) 00:31:17.391 3068.276 - 3083.171: 94.1600% ( 112) 00:31:17.391 3083.171 - 3098.065: 94.3500% ( 108) 00:31:17.391 3098.065 - 3112.960: 94.5471% ( 112) 00:31:17.391 3112.960 - 3127.855: 94.7389% ( 109) 00:31:17.391 3127.855 - 3142.749: 94.9113% ( 98) 00:31:17.391 3142.749 - 3157.644: 95.0926% ( 103) 00:31:17.391 3157.644 - 3172.538: 95.2773% ( 105) 00:31:17.391 3172.538 - 3187.433: 95.4621% ( 105) 00:31:17.391 3187.433 - 3202.327: 95.6398% ( 101) 00:31:17.391 3202.327 - 3217.222: 95.8140% ( 99) 00:31:17.391 3217.222 - 3232.116: 95.9864% ( 98) 00:31:17.391 3232.116 - 3247.011: 96.1589% ( 98) 00:31:17.391 3247.011 - 3261.905: 96.3401% ( 103) 00:31:17.391 3261.905 - 3276.800: 96.5090% ( 96) 00:31:17.391 3276.800 - 3291.695: 96.6920% ( 104) 00:31:17.391 3291.695 - 3306.589: 96.8732% ( 103) 00:31:17.391 3306.589 - 3321.484: 97.0474% ( 99) 00:31:17.391 3321.484 - 3336.378: 97.2287% ( 103) 00:31:17.391 3336.378 - 3351.273: 97.3958% ( 95) 00:31:17.391 3351.273 - 3366.167: 97.5595% ( 93) 00:31:17.391 3366.167 - 3381.062: 97.7196% ( 91) 00:31:17.391 3381.062 - 3395.956: 97.8709% ( 86) 00:31:17.391 3395.956 - 3410.851: 98.0258% ( 88) 00:31:17.391 3410.851 - 3425.745: 98.1683% ( 81) 00:31:17.391 3425.745 - 3440.640: 98.2967% ( 73) 00:31:17.391 3440.640 - 3455.535: 98.4287% ( 75) 00:31:17.391 3455.535 - 3470.429: 98.5501% ( 69) 00:31:17.391 3470.429 - 3485.324: 98.6698% ( 68) 00:31:17.391 3485.324 - 3500.218: 98.7736% ( 59) 00:31:17.391 3500.218 - 3515.113: 98.8686% ( 54) 00:31:17.391 3515.113 - 3530.007: 98.9495% ( 46) 00:31:17.391 3530.007 - 3544.902: 99.0164% ( 38) 00:31:17.391 3544.902 - 3559.796: 99.0762% ( 34) 00:31:17.391 3559.796 - 3574.691: 99.1325% ( 32) 00:31:17.391 3574.691 - 3589.585: 99.1836% ( 29) 00:31:17.391 3589.585 - 3604.480: 99.2293% ( 26) 00:31:17.391 3604.480 - 3619.375: 99.2645% ( 20) 00:31:17.391 3619.375 - 3634.269: 99.2962% ( 18) 00:31:17.391 3634.269 - 3649.164: 99.3208% ( 14) 00:31:17.391 3649.164 - 3664.058: 99.3507% ( 17) 00:31:17.391 3664.058 - 3678.953: 99.3718% ( 12) 00:31:17.391 3678.953 - 3693.847: 99.3947% ( 13) 00:31:17.391 3693.847 - 3708.742: 99.4158% ( 12) 00:31:17.391 3708.742 - 3723.636: 99.4387% ( 13) 00:31:17.391 3723.636 - 3738.531: 99.4545% ( 9) 00:31:17.391 3738.531 - 3753.425: 99.4651% ( 6) 00:31:17.391 3753.425 - 3768.320: 99.4809% ( 9) 00:31:17.391 3768.320 - 3783.215: 99.4968% ( 9) 00:31:17.391 3783.215 - 3798.109: 99.5126% ( 9) 00:31:17.391 3798.109 - 3813.004: 99.5284% ( 9) 00:31:17.391 3813.004 - 3842.793: 99.5548% ( 15) 00:31:17.391 3842.793 - 3872.582: 99.5812% ( 15) 00:31:17.391 3872.582 - 3902.371: 99.6059% ( 14) 00:31:17.391 3902.371 - 3932.160: 99.6270% ( 12) 00:31:17.391 3932.160 - 3961.949: 99.6463% ( 11) 00:31:17.391 3961.949 - 3991.738: 99.6657% ( 11) 00:31:17.391 3991.738 - 4021.527: 99.6886% ( 13) 00:31:17.391 4021.527 - 4051.316: 99.7097% ( 12) 00:31:17.391 4051.316 - 4081.105: 99.7290% ( 11) 00:31:17.391 4081.105 - 4110.895: 99.7466% ( 10) 00:31:17.391 4110.895 - 4140.684: 99.7660% ( 11) 00:31:17.391 4140.684 - 4170.473: 99.7836% ( 10) 00:31:17.391 4170.473 - 4200.262: 99.7994% ( 9) 00:31:17.391 4200.262 - 4230.051: 99.8117% ( 7) 00:31:17.391 4230.051 - 4259.840: 99.8223% ( 6) 00:31:17.391 4259.840 - 4289.629: 99.8311% ( 5) 00:31:17.391 4289.629 - 4319.418: 99.8346% ( 2) 00:31:17.391 4319.418 - 4349.207: 99.8399% ( 3) 00:31:17.391 4349.207 - 4378.996: 99.8452% ( 3) 00:31:17.391 4378.996 - 
4408.785: 99.8487% ( 2) 00:31:17.391 4408.785 - 4438.575: 99.8540% ( 3) 00:31:17.391 4438.575 - 4468.364: 99.8575% ( 2) 00:31:17.391 4468.364 - 4498.153: 99.8592% ( 1) 00:31:17.391 4498.153 - 4527.942: 99.8610% ( 1) 00:31:17.391 4527.942 - 4557.731: 99.8628% ( 1) 00:31:17.391 4557.731 - 4587.520: 99.8645% ( 1) 00:31:17.391 4587.520 - 4617.309: 99.8663% ( 1) 00:31:17.391 4617.309 - 4647.098: 99.8680% ( 1) 00:31:17.391 4647.098 - 4676.887: 99.8698% ( 1) 00:31:17.391 4676.887 - 4706.676: 99.8716% ( 1) 00:31:17.391 4706.676 - 4736.465: 99.8733% ( 1) 00:31:17.391 4766.255 - 4796.044: 99.8751% ( 1) 00:31:17.391 4796.044 - 4825.833: 99.8768% ( 1) 00:31:17.391 4825.833 - 4855.622: 99.8786% ( 1) 00:31:17.391 4855.622 - 4885.411: 99.8803% ( 1) 00:31:17.391 4885.411 - 4915.200: 99.8821% ( 1) 00:31:17.391 4915.200 - 4944.989: 99.8839% ( 1) 00:31:17.391 4944.989 - 4974.778: 99.8856% ( 1) 00:31:17.391 4974.778 - 5004.567: 99.8874% ( 1) 00:31:17.391 5004.567 - 5034.356: 99.8891% ( 1) 00:31:17.391 5034.356 - 5064.145: 99.8909% ( 1) 00:31:17.391 5064.145 - 5093.935: 99.8927% ( 1) 00:31:17.391 5093.935 - 5123.724: 99.8944% ( 1) 00:31:17.391 5123.724 - 5153.513: 99.8962% ( 1) 00:31:17.391 5153.513 - 5183.302: 99.8979% ( 1) 00:31:17.391 5183.302 - 5213.091: 99.8997% ( 1) 00:31:17.391 5213.091 - 5242.880: 99.9015% ( 1) 00:31:17.391 5242.880 - 5272.669: 99.9032% ( 1) 00:31:17.391 5272.669 - 5302.458: 99.9050% ( 1) 00:31:17.391 5302.458 - 5332.247: 99.9067% ( 1) 00:31:17.391 5332.247 - 5362.036: 99.9085% ( 1) 00:31:17.391 5391.825 - 5421.615: 99.9103% ( 1) 00:31:17.391 5421.615 - 5451.404: 99.9120% ( 1) 00:31:17.391 5451.404 - 5481.193: 99.9138% ( 1) 00:31:17.391 5481.193 - 5510.982: 99.9155% ( 1) 00:31:17.391 5510.982 - 5540.771: 99.9173% ( 1) 00:31:17.391 5540.771 - 5570.560: 99.9191% ( 1) 00:31:17.391 5570.560 - 5600.349: 99.9208% ( 1) 00:31:17.391 5600.349 - 5630.138: 99.9226% ( 1) 00:31:17.391 5630.138 - 5659.927: 99.9243% ( 1) 00:31:17.391 5659.927 - 5689.716: 99.9261% ( 1) 00:31:17.391 5689.716 - 5719.505: 99.9279% ( 1) 00:31:17.391 5719.505 - 5749.295: 99.9296% ( 1) 00:31:17.391 5749.295 - 5779.084: 99.9314% ( 1) 00:31:17.391 5779.084 - 5808.873: 99.9331% ( 1) 00:31:17.391 5808.873 - 5838.662: 99.9349% ( 1) 00:31:17.391 5838.662 - 5868.451: 99.9367% ( 1) 00:31:17.391 5868.451 - 5898.240: 99.9384% ( 1) 00:31:17.391 5898.240 - 5928.029: 99.9402% ( 1) 00:31:17.391 5957.818 - 5987.607: 99.9437% ( 2) 00:31:17.391 5987.607 - 6017.396: 99.9455% ( 1) 00:31:17.391 6047.185 - 6076.975: 99.9472% ( 1) 00:31:17.391 6076.975 - 6106.764: 99.9490% ( 1) 00:31:17.391 6106.764 - 6136.553: 99.9507% ( 1) 00:31:17.391 6136.553 - 6166.342: 99.9525% ( 1) 00:31:17.391 6166.342 - 6196.131: 99.9543% ( 1) 00:31:17.391 6196.131 - 6225.920: 99.9560% ( 1) 00:31:17.391 6225.920 - 6255.709: 99.9578% ( 1) 00:31:17.391 6255.709 - 6285.498: 99.9595% ( 1) 00:31:17.391 6285.498 - 6315.287: 99.9613% ( 1) 00:31:17.391 6345.076 - 6374.865: 99.9630% ( 1) 00:31:17.391 6374.865 - 6404.655: 99.9648% ( 1) 00:31:17.391 6404.655 - 6434.444: 99.9666% ( 1) 00:31:17.391 6434.444 - 6464.233: 99.9683% ( 1) 00:31:17.391 6464.233 - 6494.022: 99.9701% ( 1) 00:31:17.391 6494.022 - 6523.811: 99.9718% ( 1) 00:31:17.391 6523.811 - 6553.600: 99.9736% ( 1) 00:31:17.391 6553.600 - 6583.389: 99.9754% ( 1) 00:31:17.391 6583.389 - 6613.178: 99.9771% ( 1) 00:31:17.391 6613.178 - 6642.967: 99.9789% ( 1) 00:31:17.391 6642.967 - 6672.756: 99.9806% ( 1) 00:31:17.391 6702.545 - 6732.335: 99.9824% ( 1) 00:31:17.391 6732.335 - 6762.124: 99.9842% ( 1) 00:31:17.391 6762.124 - 
6791.913: 99.9859% ( 1) 00:31:17.391 6791.913 - 6821.702: 99.9877% ( 1) 00:31:17.391 6821.702 - 6851.491: 99.9894% ( 1) 00:31:17.391 6851.491 - 6881.280: 99.9912% ( 1) 00:31:17.391 6881.280 - 6911.069: 99.9930% ( 1) 00:31:17.391 6911.069 - 6940.858: 99.9947% ( 1) 00:31:17.391 6940.858 - 6970.647: 99.9965% ( 1) 00:31:17.391 6970.647 - 7000.436: 99.9982% ( 1) 00:31:17.391 7060.015 - 7089.804: 100.0000% ( 1) 00:31:17.391 00:31:17.391 22:39:16 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:31:18.763 Initializing NVMe Controllers 00:31:18.763 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:18.764 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:18.764 Initialization complete. Launching workers. 00:31:18.764 ======================================================== 00:31:18.764 Latency(us) 00:31:18.764 Device Information : IOPS MiB/s Average min max 00:31:18.764 PCIE (0000:00:06.0) NSID 1 from core 0: 63856.67 748.32 2007.58 813.49 9452.78 00:31:18.764 ======================================================== 00:31:18.764 Total : 63856.67 748.32 2007.58 813.49 9452.78 00:31:18.764 00:31:18.764 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:31:18.764 ================================================================================= 00:31:18.764 1.00000% : 1325.615us 00:31:18.764 10.00000% : 1556.480us 00:31:18.764 25.00000% : 1697.978us 00:31:18.764 50.00000% : 1884.160us 00:31:18.764 75.00000% : 2159.709us 00:31:18.764 90.00000% : 2666.124us 00:31:18.764 95.00000% : 3068.276us 00:31:18.764 98.00000% : 3470.429us 00:31:18.764 99.00000% : 3723.636us 00:31:18.764 99.50000% : 3961.949us 00:31:18.764 99.90000% : 5630.138us 00:31:18.764 99.99000% : 9353.775us 00:31:18.764 99.99900% : 9472.931us 00:31:18.764 99.99990% : 9472.931us 00:31:18.764 99.99999% : 9472.931us 00:31:18.764 00:31:18.764 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:31:18.764 ============================================================================== 00:31:18.764 Range in us Cumulative IO count 00:31:18.764 811.753 - 815.476: 0.0016% ( 1) 00:31:18.764 960.698 - 968.145: 0.0094% ( 5) 00:31:18.764 968.145 - 975.593: 0.0109% ( 1) 00:31:18.764 975.593 - 983.040: 0.0141% ( 2) 00:31:18.764 983.040 - 990.487: 0.0156% ( 1) 00:31:18.764 990.487 - 997.935: 0.0188% ( 2) 00:31:18.764 1005.382 - 1012.829: 0.0235% ( 3) 00:31:18.764 1012.829 - 1020.276: 0.0266% ( 2) 00:31:18.764 1020.276 - 1027.724: 0.0297% ( 2) 00:31:18.764 1027.724 - 1035.171: 0.0313% ( 1) 00:31:18.764 1035.171 - 1042.618: 0.0344% ( 2) 00:31:18.764 1042.618 - 1050.065: 0.0360% ( 1) 00:31:18.764 1050.065 - 1057.513: 0.0391% ( 2) 00:31:18.764 1057.513 - 1064.960: 0.0454% ( 4) 00:31:18.764 1064.960 - 1072.407: 0.0501% ( 3) 00:31:18.764 1072.407 - 1079.855: 0.0547% ( 3) 00:31:18.764 1079.855 - 1087.302: 0.0579% ( 2) 00:31:18.764 1094.749 - 1102.196: 0.0626% ( 3) 00:31:18.764 1102.196 - 1109.644: 0.0641% ( 1) 00:31:18.764 1109.644 - 1117.091: 0.0704% ( 4) 00:31:18.764 1117.091 - 1124.538: 0.0813% ( 7) 00:31:18.764 1124.538 - 1131.985: 0.0876% ( 4) 00:31:18.764 1131.985 - 1139.433: 0.1017% ( 9) 00:31:18.764 1139.433 - 1146.880: 0.1064% ( 3) 00:31:18.764 1146.880 - 1154.327: 0.1126% ( 4) 00:31:18.764 1154.327 - 1161.775: 0.1345% ( 14) 00:31:18.764 1161.775 - 1169.222: 0.1955% ( 39) 00:31:18.764 1169.222 - 1176.669: 0.2096% ( 9) 00:31:18.764 1176.669 - 1184.116: 0.2221% ( 8) 00:31:18.764 1184.116 - 1191.564: 0.2346% ( 8) 00:31:18.764 1191.564 - 
1199.011: 0.2503% ( 10) 00:31:18.764 1199.011 - 1206.458: 0.2690% ( 12) 00:31:18.764 1206.458 - 1213.905: 0.2815% ( 8) 00:31:18.764 1213.905 - 1221.353: 0.3019% ( 13) 00:31:18.764 1221.353 - 1228.800: 0.3222% ( 13) 00:31:18.764 1228.800 - 1236.247: 0.3378% ( 10) 00:31:18.764 1236.247 - 1243.695: 0.3707% ( 21) 00:31:18.764 1243.695 - 1251.142: 0.3910% ( 13) 00:31:18.764 1251.142 - 1258.589: 0.4098% ( 12) 00:31:18.764 1258.589 - 1266.036: 0.4364% ( 17) 00:31:18.764 1266.036 - 1273.484: 0.5068% ( 45) 00:31:18.764 1273.484 - 1280.931: 0.5709% ( 41) 00:31:18.764 1280.931 - 1288.378: 0.6538% ( 53) 00:31:18.764 1288.378 - 1295.825: 0.7711% ( 75) 00:31:18.764 1295.825 - 1303.273: 0.8509% ( 51) 00:31:18.764 1303.273 - 1310.720: 0.8900% ( 25) 00:31:18.764 1310.720 - 1318.167: 0.9479% ( 37) 00:31:18.764 1318.167 - 1325.615: 1.0323% ( 54) 00:31:18.764 1325.615 - 1333.062: 1.1058% ( 47) 00:31:18.764 1333.062 - 1340.509: 1.1637% ( 37) 00:31:18.764 1340.509 - 1347.956: 1.2779% ( 73) 00:31:18.764 1347.956 - 1355.404: 1.4797% ( 129) 00:31:18.764 1355.404 - 1362.851: 1.6251% ( 93) 00:31:18.764 1362.851 - 1370.298: 1.8394% ( 137) 00:31:18.764 1370.298 - 1377.745: 1.9942% ( 99) 00:31:18.764 1377.745 - 1385.193: 2.2070% ( 136) 00:31:18.764 1385.193 - 1392.640: 2.3743% ( 107) 00:31:18.764 1392.640 - 1400.087: 2.6074% ( 149) 00:31:18.764 1400.087 - 1407.535: 2.8232% ( 138) 00:31:18.764 1407.535 - 1414.982: 3.0923% ( 172) 00:31:18.764 1414.982 - 1422.429: 3.3832% ( 186) 00:31:18.764 1422.429 - 1429.876: 3.6006% ( 139) 00:31:18.764 1429.876 - 1437.324: 3.7601% ( 102) 00:31:18.764 1437.324 - 1444.771: 3.9463% ( 119) 00:31:18.764 1444.771 - 1452.218: 4.1777% ( 148) 00:31:18.764 1452.218 - 1459.665: 4.5453% ( 235) 00:31:18.764 1459.665 - 1467.113: 4.8972% ( 225) 00:31:18.764 1467.113 - 1474.560: 5.1772% ( 179) 00:31:18.764 1474.560 - 1482.007: 5.5401% ( 232) 00:31:18.764 1482.007 - 1489.455: 5.9296% ( 249) 00:31:18.764 1489.455 - 1496.902: 6.2439% ( 201) 00:31:18.764 1496.902 - 1504.349: 6.6803% ( 279) 00:31:18.764 1504.349 - 1511.796: 7.1105% ( 275) 00:31:18.764 1511.796 - 1519.244: 7.6235% ( 328) 00:31:18.764 1519.244 - 1526.691: 8.0442% ( 269) 00:31:18.764 1526.691 - 1534.138: 8.4853% ( 282) 00:31:18.764 1534.138 - 1541.585: 9.1375% ( 417) 00:31:18.764 1541.585 - 1549.033: 9.7053% ( 363) 00:31:18.764 1549.033 - 1556.480: 10.2043% ( 319) 00:31:18.764 1556.480 - 1563.927: 10.7439% ( 345) 00:31:18.764 1563.927 - 1571.375: 11.3038% ( 358) 00:31:18.764 1571.375 - 1578.822: 11.9467% ( 411) 00:31:18.764 1578.822 - 1586.269: 12.6724% ( 464) 00:31:18.764 1586.269 - 1593.716: 13.3951% ( 462) 00:31:18.764 1593.716 - 1601.164: 14.0035% ( 389) 00:31:18.764 1601.164 - 1608.611: 14.7715% ( 491) 00:31:18.764 1608.611 - 1616.058: 15.7256% ( 610) 00:31:18.764 1616.058 - 1623.505: 16.4983% ( 494) 00:31:18.764 1623.505 - 1630.953: 17.3460% ( 542) 00:31:18.764 1630.953 - 1638.400: 18.2845% ( 600) 00:31:18.764 1638.400 - 1645.847: 19.1729% ( 568) 00:31:18.764 1645.847 - 1653.295: 19.9393% ( 490) 00:31:18.764 1653.295 - 1660.742: 20.6838% ( 476) 00:31:18.764 1660.742 - 1668.189: 21.3799% ( 445) 00:31:18.764 1668.189 - 1675.636: 22.3136% ( 597) 00:31:18.764 1675.636 - 1683.084: 23.4570% ( 731) 00:31:18.764 1683.084 - 1690.531: 24.3329% ( 560) 00:31:18.764 1690.531 - 1697.978: 25.1384% ( 515) 00:31:18.764 1697.978 - 1705.425: 26.0706% ( 596) 00:31:18.764 1705.425 - 1712.873: 27.0748% ( 642) 00:31:18.764 1712.873 - 1720.320: 28.1196% ( 668) 00:31:18.764 1720.320 - 1727.767: 29.1050% ( 630) 00:31:18.764 1727.767 - 1735.215: 29.9027% ( 510) 
00:31:18.764 1735.215 - 1742.662: 30.8224% ( 588) 00:31:18.764 1742.662 - 1750.109: 31.9282% ( 707) 00:31:18.764 1750.109 - 1757.556: 32.8526% ( 591) 00:31:18.764 1757.556 - 1765.004: 33.8896% ( 663) 00:31:18.764 1765.004 - 1772.451: 35.0752% ( 758) 00:31:18.764 1772.451 - 1779.898: 36.1498% ( 687) 00:31:18.764 1779.898 - 1787.345: 37.2196% ( 684) 00:31:18.764 1787.345 - 1794.793: 38.1299% ( 582) 00:31:18.764 1794.793 - 1802.240: 39.2952% ( 745) 00:31:18.764 1802.240 - 1809.687: 40.2931% ( 638) 00:31:18.764 1809.687 - 1817.135: 41.3192% ( 656) 00:31:18.764 1817.135 - 1824.582: 42.2576% ( 600) 00:31:18.764 1824.582 - 1832.029: 43.2946% ( 663) 00:31:18.764 1832.029 - 1839.476: 44.4787% ( 757) 00:31:18.765 1839.476 - 1846.924: 45.5814% ( 705) 00:31:18.765 1846.924 - 1854.371: 46.6747% ( 699) 00:31:18.765 1854.371 - 1861.818: 47.4865% ( 519) 00:31:18.765 1861.818 - 1869.265: 48.4609% ( 623) 00:31:18.765 1869.265 - 1876.713: 49.5151% ( 674) 00:31:18.765 1876.713 - 1884.160: 50.7351% ( 780) 00:31:18.765 1884.160 - 1891.607: 51.7721% ( 663) 00:31:18.765 1891.607 - 1899.055: 52.7544% ( 628) 00:31:18.765 1899.055 - 1906.502: 53.7679% ( 648) 00:31:18.765 1906.502 - 1921.396: 55.7278% ( 1253) 00:31:18.765 1921.396 - 1936.291: 57.5281% ( 1151) 00:31:18.765 1936.291 - 1951.185: 59.2924% ( 1128) 00:31:18.765 1951.185 - 1966.080: 61.2288% ( 1238) 00:31:18.765 1966.080 - 1980.975: 62.8789% ( 1055) 00:31:18.765 1980.975 - 1995.869: 64.4055% ( 976) 00:31:18.765 1995.869 - 2010.764: 65.5895% ( 757) 00:31:18.765 2010.764 - 2025.658: 66.9174% ( 849) 00:31:18.765 2025.658 - 2040.553: 67.9513% ( 661) 00:31:18.765 2040.553 - 2055.447: 68.8788% ( 593) 00:31:18.765 2055.447 - 2070.342: 69.8580% ( 626) 00:31:18.765 2070.342 - 2085.236: 70.8465% ( 632) 00:31:18.765 2085.236 - 2100.131: 71.7240% ( 561) 00:31:18.765 2100.131 - 2115.025: 72.6249% ( 576) 00:31:18.765 2115.025 - 2129.920: 73.4351% ( 518) 00:31:18.765 2129.920 - 2144.815: 74.1953% ( 486) 00:31:18.765 2144.815 - 2159.709: 75.0055% ( 518) 00:31:18.765 2159.709 - 2174.604: 75.9283% ( 590) 00:31:18.765 2174.604 - 2189.498: 76.5993% ( 429) 00:31:18.765 2189.498 - 2204.393: 77.3204% ( 461) 00:31:18.765 2204.393 - 2219.287: 77.9914% ( 429) 00:31:18.765 2219.287 - 2234.182: 78.6295% ( 408) 00:31:18.765 2234.182 - 2249.076: 79.1738% ( 348) 00:31:18.765 2249.076 - 2263.971: 79.7197% ( 349) 00:31:18.765 2263.971 - 2278.865: 80.2640% ( 348) 00:31:18.765 2278.865 - 2293.760: 80.7880% ( 335) 00:31:18.765 2293.760 - 2308.655: 81.3026% ( 329) 00:31:18.765 2308.655 - 2323.549: 81.8751% ( 366) 00:31:18.765 2323.549 - 2338.444: 82.3756% ( 320) 00:31:18.765 2338.444 - 2353.338: 82.8260% ( 288) 00:31:18.765 2353.338 - 2368.233: 83.2906% ( 297) 00:31:18.765 2368.233 - 2383.127: 83.7551% ( 297) 00:31:18.765 2383.127 - 2398.022: 84.2009% ( 285) 00:31:18.765 2398.022 - 2412.916: 84.6529% ( 289) 00:31:18.765 2412.916 - 2427.811: 85.0721% ( 268) 00:31:18.765 2427.811 - 2442.705: 85.4803% ( 261) 00:31:18.765 2442.705 - 2457.600: 85.8854% ( 259) 00:31:18.765 2457.600 - 2472.495: 86.2999% ( 265) 00:31:18.765 2472.495 - 2487.389: 86.6519% ( 225) 00:31:18.765 2487.389 - 2502.284: 86.9975% ( 221) 00:31:18.765 2502.284 - 2517.178: 87.3182% ( 205) 00:31:18.765 2517.178 - 2532.073: 87.6373% ( 204) 00:31:18.765 2532.073 - 2546.967: 87.9172% ( 179) 00:31:18.765 2546.967 - 2561.862: 88.1972% ( 179) 00:31:18.765 2561.862 - 2576.756: 88.4928% ( 189) 00:31:18.765 2576.756 - 2591.651: 88.7869% ( 188) 00:31:18.765 2591.651 - 2606.545: 89.0559% ( 172) 00:31:18.765 2606.545 - 2621.440: 89.3202% ( 169) 
00:31:18.765 2621.440 - 2636.335: 89.6033% ( 181) 00:31:18.765 2636.335 - 2651.229: 89.8818% ( 178) 00:31:18.765 2651.229 - 2666.124: 90.1868% ( 195) 00:31:18.765 2666.124 - 2681.018: 90.4558% ( 172) 00:31:18.765 2681.018 - 2695.913: 90.7107% ( 163) 00:31:18.765 2695.913 - 2710.807: 90.9547% ( 156) 00:31:18.765 2710.807 - 2725.702: 91.1909% ( 151) 00:31:18.765 2725.702 - 2740.596: 91.4553% ( 169) 00:31:18.765 2740.596 - 2755.491: 91.6774% ( 142) 00:31:18.765 2755.491 - 2770.385: 91.8541% ( 113) 00:31:18.765 2770.385 - 2785.280: 92.0355% ( 116) 00:31:18.765 2785.280 - 2800.175: 92.1951% ( 102) 00:31:18.765 2800.175 - 2815.069: 92.3718% ( 113) 00:31:18.765 2815.069 - 2829.964: 92.5376% ( 106) 00:31:18.765 2829.964 - 2844.858: 92.7191% ( 116) 00:31:18.765 2844.858 - 2859.753: 92.9146% ( 125) 00:31:18.765 2859.753 - 2874.647: 93.0929% ( 114) 00:31:18.765 2874.647 - 2889.542: 93.2774% ( 118) 00:31:18.765 2889.542 - 2904.436: 93.4511% ( 111) 00:31:18.765 2904.436 - 2919.331: 93.6106% ( 102) 00:31:18.765 2919.331 - 2934.225: 93.7608% ( 96) 00:31:18.765 2934.225 - 2949.120: 93.9062% ( 93) 00:31:18.765 2949.120 - 2964.015: 94.0486% ( 91) 00:31:18.765 2964.015 - 2978.909: 94.1956% ( 94) 00:31:18.765 2978.909 - 2993.804: 94.3536% ( 101) 00:31:18.765 2993.804 - 3008.698: 94.4943% ( 90) 00:31:18.765 3008.698 - 3023.593: 94.6367% ( 91) 00:31:18.765 3023.593 - 3038.487: 94.7696% ( 85) 00:31:18.765 3038.487 - 3053.382: 94.9151% ( 93) 00:31:18.765 3053.382 - 3068.276: 95.0480% ( 85) 00:31:18.765 3068.276 - 3083.171: 95.1872% ( 89) 00:31:18.765 3083.171 - 3098.065: 95.3139% ( 81) 00:31:18.765 3098.065 - 3112.960: 95.4437% ( 83) 00:31:18.765 3112.960 - 3127.855: 95.5548% ( 71) 00:31:18.765 3127.855 - 3142.749: 95.6924% ( 88) 00:31:18.765 3142.749 - 3157.644: 95.8035% ( 71) 00:31:18.765 3157.644 - 3172.538: 95.9333% ( 83) 00:31:18.765 3172.538 - 3187.433: 96.0537% ( 77) 00:31:18.765 3187.433 - 3202.327: 96.1820% ( 82) 00:31:18.765 3202.327 - 3217.222: 96.3056% ( 79) 00:31:18.765 3217.222 - 3232.116: 96.4197% ( 73) 00:31:18.765 3232.116 - 3247.011: 96.5464% ( 81) 00:31:18.765 3247.011 - 3261.905: 96.6575% ( 71) 00:31:18.765 3261.905 - 3276.800: 96.7764% ( 76) 00:31:18.765 3276.800 - 3291.695: 96.8921% ( 74) 00:31:18.765 3291.695 - 3306.589: 97.0125% ( 77) 00:31:18.765 3306.589 - 3321.484: 97.1126% ( 64) 00:31:18.765 3321.484 - 3336.378: 97.2284% ( 74) 00:31:18.765 3336.378 - 3351.273: 97.3316% ( 66) 00:31:18.765 3351.273 - 3366.167: 97.4302% ( 63) 00:31:18.765 3366.167 - 3381.062: 97.5334% ( 66) 00:31:18.765 3381.062 - 3395.956: 97.6257% ( 59) 00:31:18.765 3395.956 - 3410.851: 97.7133% ( 56) 00:31:18.765 3410.851 - 3425.745: 97.8009% ( 56) 00:31:18.765 3425.745 - 3440.640: 97.8791% ( 50) 00:31:18.765 3440.640 - 3455.535: 97.9588% ( 51) 00:31:18.765 3455.535 - 3470.429: 98.0386% ( 51) 00:31:18.765 3470.429 - 3485.324: 98.1199% ( 52) 00:31:18.765 3485.324 - 3500.218: 98.1934% ( 47) 00:31:18.765 3500.218 - 3515.113: 98.2638% ( 45) 00:31:18.765 3515.113 - 3530.007: 98.3295% ( 42) 00:31:18.765 3530.007 - 3544.902: 98.3999% ( 45) 00:31:18.765 3544.902 - 3559.796: 98.4578% ( 37) 00:31:18.765 3559.796 - 3574.691: 98.5235% ( 42) 00:31:18.765 3574.691 - 3589.585: 98.5860% ( 40) 00:31:18.765 3589.585 - 3604.480: 98.6486% ( 40) 00:31:18.765 3604.480 - 3619.375: 98.7065% ( 37) 00:31:18.765 3619.375 - 3634.269: 98.7675% ( 39) 00:31:18.765 3634.269 - 3649.164: 98.8207% ( 34) 00:31:18.765 3649.164 - 3664.058: 98.8723% ( 33) 00:31:18.765 3664.058 - 3678.953: 98.9145% ( 27) 00:31:18.765 3678.953 - 3693.847: 98.9599% ( 29) 
00:31:18.765 3693.847 - 3708.742: 98.9958% ( 23) 00:31:18.765 3708.742 - 3723.636: 99.0365% ( 26) 00:31:18.765 3723.636 - 3738.531: 99.0834% ( 30) 00:31:18.765 3738.531 - 3753.425: 99.1288% ( 29) 00:31:18.765 3753.425 - 3768.320: 99.1632% ( 22) 00:31:18.765 3768.320 - 3783.215: 99.2023% ( 25) 00:31:18.765 3783.215 - 3798.109: 99.2383% ( 23) 00:31:18.765 3798.109 - 3813.004: 99.2727% ( 22) 00:31:18.765 3813.004 - 3842.793: 99.3337% ( 39) 00:31:18.766 3842.793 - 3872.582: 99.3900% ( 36) 00:31:18.766 3872.582 - 3902.371: 99.4400% ( 32) 00:31:18.766 3902.371 - 3932.160: 99.4823% ( 27) 00:31:18.766 3932.160 - 3961.949: 99.5136% ( 20) 00:31:18.766 3961.949 - 3991.738: 99.5339% ( 13) 00:31:18.766 3991.738 - 4021.527: 99.5527% ( 12) 00:31:18.766 4021.527 - 4051.316: 99.5761% ( 15) 00:31:18.766 4051.316 - 4081.105: 99.5949% ( 12) 00:31:18.766 4081.105 - 4110.895: 99.6074% ( 8) 00:31:18.766 4110.895 - 4140.684: 99.6246% ( 11) 00:31:18.766 4140.684 - 4170.473: 99.6387% ( 9) 00:31:18.766 4170.473 - 4200.262: 99.6606% ( 14) 00:31:18.766 4200.262 - 4230.051: 99.6872% ( 17) 00:31:18.766 4230.051 - 4259.840: 99.7075% ( 13) 00:31:18.766 4259.840 - 4289.629: 99.7247% ( 11) 00:31:18.766 4289.629 - 4319.418: 99.7466% ( 14) 00:31:18.766 4319.418 - 4349.207: 99.7607% ( 9) 00:31:18.766 4349.207 - 4378.996: 99.7685% ( 5) 00:31:18.766 4378.996 - 4408.785: 99.7748% ( 4) 00:31:18.766 4408.785 - 4438.575: 99.7826% ( 5) 00:31:18.766 4438.575 - 4468.364: 99.7873% ( 3) 00:31:18.766 4468.364 - 4498.153: 99.7920% ( 3) 00:31:18.766 4498.153 - 4527.942: 99.7967% ( 3) 00:31:18.766 4527.942 - 4557.731: 99.8029% ( 4) 00:31:18.766 4557.731 - 4587.520: 99.8060% ( 2) 00:31:18.766 4587.520 - 4617.309: 99.8092% ( 2) 00:31:18.766 4617.309 - 4647.098: 99.8107% ( 1) 00:31:18.766 4647.098 - 4676.887: 99.8123% ( 1) 00:31:18.766 4676.887 - 4706.676: 99.8170% ( 3) 00:31:18.766 4706.676 - 4736.465: 99.8201% ( 2) 00:31:18.766 4736.465 - 4766.255: 99.8217% ( 1) 00:31:18.766 4766.255 - 4796.044: 99.8233% ( 1) 00:31:18.766 4796.044 - 4825.833: 99.8279% ( 3) 00:31:18.766 4825.833 - 4855.622: 99.8295% ( 1) 00:31:18.766 4855.622 - 4885.411: 99.8342% ( 3) 00:31:18.766 4885.411 - 4915.200: 99.8373% ( 2) 00:31:18.766 4915.200 - 4944.989: 99.8405% ( 2) 00:31:18.766 4944.989 - 4974.778: 99.8592% ( 12) 00:31:18.766 4974.778 - 5004.567: 99.8608% ( 1) 00:31:18.766 5004.567 - 5034.356: 99.8639% ( 2) 00:31:18.766 5034.356 - 5064.145: 99.8655% ( 1) 00:31:18.766 5064.145 - 5093.935: 99.8671% ( 1) 00:31:18.766 5093.935 - 5123.724: 99.8686% ( 1) 00:31:18.766 5123.724 - 5153.513: 99.8702% ( 1) 00:31:18.766 5153.513 - 5183.302: 99.8733% ( 2) 00:31:18.766 5183.302 - 5213.091: 99.8749% ( 1) 00:31:18.766 5213.091 - 5242.880: 99.8764% ( 1) 00:31:18.766 5242.880 - 5272.669: 99.8780% ( 1) 00:31:18.766 5272.669 - 5302.458: 99.8796% ( 1) 00:31:18.766 5302.458 - 5332.247: 99.8827% ( 2) 00:31:18.766 5332.247 - 5362.036: 99.8843% ( 1) 00:31:18.766 5362.036 - 5391.825: 99.8874% ( 2) 00:31:18.766 5391.825 - 5421.615: 99.8889% ( 1) 00:31:18.766 5421.615 - 5451.404: 99.8921% ( 2) 00:31:18.766 5481.193 - 5510.982: 99.8936% ( 1) 00:31:18.766 5510.982 - 5540.771: 99.8952% ( 1) 00:31:18.766 5540.771 - 5570.560: 99.8968% ( 1) 00:31:18.766 5570.560 - 5600.349: 99.8999% ( 2) 00:31:18.766 5600.349 - 5630.138: 99.9015% ( 1) 00:31:18.766 5630.138 - 5659.927: 99.9030% ( 1) 00:31:18.766 5659.927 - 5689.716: 99.9046% ( 1) 00:31:18.766 5689.716 - 5719.505: 99.9077% ( 2) 00:31:18.766 5719.505 - 5749.295: 99.9108% ( 2) 00:31:18.766 5749.295 - 5779.084: 99.9124% ( 1) 00:31:18.766 5779.084 - 
5808.873: 99.9155% ( 2) 00:31:18.766 5808.873 - 5838.662: 99.9171% ( 1) 00:31:18.766 5838.662 - 5868.451: 99.9187% ( 1) 00:31:18.766 5868.451 - 5898.240: 99.9218% ( 2) 00:31:18.766 5898.240 - 5928.029: 99.9234% ( 1) 00:31:18.766 5928.029 - 5957.818: 99.9249% ( 1) 00:31:18.766 5957.818 - 5987.607: 99.9265% ( 1) 00:31:18.766 5987.607 - 6017.396: 99.9296% ( 2) 00:31:18.766 6017.396 - 6047.185: 99.9312% ( 1) 00:31:18.766 6047.185 - 6076.975: 99.9327% ( 1) 00:31:18.766 6076.975 - 6106.764: 99.9359% ( 2) 00:31:18.766 6106.764 - 6136.553: 99.9374% ( 1) 00:31:18.766 6136.553 - 6166.342: 99.9390% ( 1) 00:31:18.766 6166.342 - 6196.131: 99.9406% ( 1) 00:31:18.766 6196.131 - 6225.920: 99.9421% ( 1) 00:31:18.766 6225.920 - 6255.709: 99.9453% ( 2) 00:31:18.766 6255.709 - 6285.498: 99.9468% ( 1) 00:31:18.766 6285.498 - 6315.287: 99.9484% ( 1) 00:31:18.766 6315.287 - 6345.076: 99.9499% ( 1) 00:31:18.766 6345.076 - 6374.865: 99.9531% ( 2) 00:31:18.766 6404.655 - 6434.444: 99.9562% ( 2) 00:31:18.766 6434.444 - 6464.233: 99.9578% ( 1) 00:31:18.766 6464.233 - 6494.022: 99.9593% ( 1) 00:31:18.766 6494.022 - 6523.811: 99.9609% ( 1) 00:31:18.766 6523.811 - 6553.600: 99.9625% ( 1) 00:31:18.766 6553.600 - 6583.389: 99.9640% ( 1) 00:31:18.766 6911.069 - 6940.858: 99.9672% ( 2) 00:31:18.766 7179.171 - 7208.960: 99.9687% ( 1) 00:31:18.766 7983.476 - 8043.055: 99.9703% ( 1) 00:31:18.766 8162.211 - 8221.789: 99.9718% ( 1) 00:31:18.766 8757.993 - 8817.571: 99.9734% ( 1) 00:31:18.766 8817.571 - 8877.149: 99.9750% ( 1) 00:31:18.766 8996.305 - 9055.884: 99.9765% ( 1) 00:31:18.766 9115.462 - 9175.040: 99.9797% ( 2) 00:31:18.766 9234.618 - 9294.196: 99.9828% ( 2) 00:31:18.766 9294.196 - 9353.775: 99.9922% ( 6) 00:31:18.766 9353.775 - 9413.353: 99.9937% ( 1) 00:31:18.766 9413.353 - 9472.931: 100.0000% ( 4) 00:31:18.766 00:31:18.766 22:39:17 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:31:18.766 00:31:18.766 real 0m2.718s 00:31:18.766 user 0m2.264s 00:31:18.766 sys 0m0.295s 00:31:18.766 22:39:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.766 22:39:17 -- common/autotest_common.sh@10 -- # set +x 00:31:18.766 ************************************ 00:31:18.766 END TEST nvme_perf 00:31:18.766 ************************************ 00:31:18.766 22:39:17 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:31:18.766 22:39:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:18.766 22:39:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:18.766 22:39:17 -- common/autotest_common.sh@10 -- # set +x 00:31:18.766 ************************************ 00:31:18.766 START TEST nvme_hello_world 00:31:18.766 ************************************ 00:31:18.766 22:39:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:31:19.024 Initializing NVMe Controllers 00:31:19.024 Attached to 0000:00:06.0 00:31:19.024 Namespace ID: 1 size: 5GB 00:31:19.024 Initialization complete. 00:31:19.024 INFO: using host memory buffer for IO 00:31:19.024 Hello world! 
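The run_test / START TEST / END TEST banners threaded through this log come from the bash autotest harness (test/common/autotest_common.sh). Below is a minimal sketch of such a wrapper; the banner format matches the log, but the function body is an assumption, not the SPDK implementation — the real helper also toggles xtrace and performs the `'[' N -le 1 ']'` argument-count check visible in the trace above.

```bash
# Hypothetical run_test-style wrapper; only the banner format is taken
# from the log, everything else is assumed.
run_test() {
    local test_name=$1
    shift
    printf '%s\n' '************************************' \
                  "START TEST $test_name" \
                  '************************************'
    time "$@"          # run the test binary or shell function with its args
    local rc=$?
    printf '%s\n' '************************************' \
                  "END TEST $test_name" \
                  '************************************'
    return $rc
}

# Invocation mirroring the log:
run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
```

This would explain the repeating real/user/sys triplets: each test body runs under `time`, so the timing lines land just before the END TEST banner.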
00:31:19.024 00:31:19.024 real 0m0.326s 00:31:19.024 user 0m0.120s 00:31:19.024 sys 0m0.137s 00:31:19.024 22:39:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.024 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.024 ************************************ 00:31:19.024 END TEST nvme_hello_world 00:31:19.024 ************************************ 00:31:19.024 22:39:18 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:31:19.024 22:39:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:19.024 22:39:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:19.024 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.024 ************************************ 00:31:19.024 START TEST nvme_sgl 00:31:19.024 ************************************ 00:31:19.024 22:39:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:31:19.283 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:31:19.283 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:31:19.541 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:31:19.541 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:31:19.541 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:31:19.541 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:31:19.541 NVMe Readv/Writev Request test 00:31:19.541 Attached to 0000:00:06.0 00:31:19.541 0000:00:06.0: build_io_request_2 test passed 00:31:19.541 0000:00:06.0: build_io_request_4 test passed 00:31:19.541 0000:00:06.0: build_io_request_5 test passed 00:31:19.541 0000:00:06.0: build_io_request_6 test passed 00:31:19.541 0000:00:06.0: build_io_request_7 test passed 00:31:19.541 0000:00:06.0: build_io_request_10 test passed 00:31:19.541 Cleaning up... 00:31:19.541 00:31:19.541 real 0m0.452s 00:31:19.541 user 0m0.240s 00:31:19.541 sys 0m0.136s 00:31:19.541 22:39:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.541 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.542 ************************************ 00:31:19.542 END TEST nvme_sgl 00:31:19.542 ************************************ 00:31:19.542 22:39:18 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:31:19.542 22:39:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:19.542 22:39:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:19.542 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.542 ************************************ 00:31:19.542 START TEST nvme_e2edp 00:31:19.542 ************************************ 00:31:19.542 22:39:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:31:19.801 NVMe Write/Read with End-to-End data protection test 00:31:19.801 Attached to 0000:00:06.0 00:31:19.801 Cleaning up... 
00:31:19.801 00:31:19.801 real 0m0.269s 00:31:19.801 user 0m0.073s 00:31:19.801 sys 0m0.116s 00:31:19.801 22:39:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.801 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.801 ************************************ 00:31:19.801 END TEST nvme_e2edp 00:31:19.801 ************************************ 00:31:20.060 22:39:18 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:31:20.060 22:39:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:20.060 22:39:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.060 22:39:18 -- common/autotest_common.sh@10 -- # set +x 00:31:20.060 ************************************ 00:31:20.060 START TEST nvme_reserve 00:31:20.060 ************************************ 00:31:20.060 22:39:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:31:20.319 ===================================================== 00:31:20.319 NVMe Controller at PCI bus 0, device 6, function 0 00:31:20.319 ===================================================== 00:31:20.319 Reservations: Not Supported 00:31:20.319 Reservation test passed 00:31:20.319 00:31:20.319 real 0m0.277s 00:31:20.319 user 0m0.079s 00:31:20.319 sys 0m0.117s 00:31:20.319 22:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.319 22:39:19 -- common/autotest_common.sh@10 -- # set +x 00:31:20.319 ************************************ 00:31:20.319 END TEST nvme_reserve 00:31:20.319 ************************************ 00:31:20.319 22:39:19 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:31:20.319 22:39:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:20.319 22:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.319 22:39:19 -- common/autotest_common.sh@10 -- # set +x 00:31:20.319 ************************************ 00:31:20.319 START TEST nvme_err_injection 00:31:20.319 ************************************ 00:31:20.319 22:39:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:31:20.578 NVMe Error Injection test 00:31:20.578 Attached to 0000:00:06.0 00:31:20.578 0000:00:06.0: get features failed as expected 00:31:20.578 0000:00:06.0: get features successfully as expected 00:31:20.578 0000:00:06.0: read failed as expected 00:31:20.578 0000:00:06.0: read successfully as expected 00:31:20.578 Cleaning up... 
00:31:20.578 00:31:20.578 real 0m0.282s 00:31:20.578 user 0m0.098s 00:31:20.578 sys 0m0.103s 00:31:20.578 22:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.578 22:39:19 -- common/autotest_common.sh@10 -- # set +x 00:31:20.578 ************************************ 00:31:20.578 END TEST nvme_err_injection 00:31:20.578 ************************************ 00:31:20.578 22:39:19 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:31:20.578 22:39:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:31:20.578 22:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.578 22:39:19 -- common/autotest_common.sh@10 -- # set +x 00:31:20.578 ************************************ 00:31:20.578 START TEST nvme_overhead 00:31:20.578 ************************************ 00:31:20.578 22:39:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:31:21.955 Initializing NVMe Controllers 00:31:21.955 Attached to 0000:00:06.0 00:31:21.955 Initialization complete. Launching workers. 00:31:21.955 submit (in ns) avg, min, max = 13361.1, 10335.5, 117111.8 00:31:21.955 complete (in ns) avg, min, max = 10085.7, 7340.9, 95701.4 00:31:21.955 00:31:21.955 Submit histogram 00:31:21.955 ================ 00:31:21.955 Range in us Cumulative Count 00:31:21.955 10.298 - 10.356: 0.0245% ( 2) 00:31:21.955 10.356 - 10.415: 0.0856% ( 5) 00:31:21.955 10.415 - 10.473: 0.2080% ( 10) 00:31:21.955 10.473 - 10.531: 0.5015% ( 24) 00:31:21.955 10.531 - 10.589: 0.7095% ( 17) 00:31:21.955 10.589 - 10.647: 0.8073% ( 8) 00:31:21.955 10.647 - 10.705: 0.9786% ( 14) 00:31:21.955 10.705 - 10.764: 1.1009% ( 10) 00:31:21.955 10.764 - 10.822: 1.2844% ( 15) 00:31:21.955 10.822 - 10.880: 1.7615% ( 39) 00:31:21.955 10.880 - 10.938: 2.5688% ( 66) 00:31:21.955 10.938 - 10.996: 4.1101% ( 126) 00:31:21.955 10.996 - 11.055: 5.2355% ( 92) 00:31:21.955 11.055 - 11.113: 6.4343% ( 98) 00:31:21.955 11.113 - 11.171: 8.3180% ( 154) 00:31:21.955 11.171 - 11.229: 10.8624% ( 208) 00:31:21.955 11.229 - 11.287: 13.4924% ( 215) 00:31:21.955 11.287 - 11.345: 15.5107% ( 165) 00:31:21.955 11.345 - 11.404: 18.2630% ( 225) 00:31:21.955 11.404 - 11.462: 22.6055% ( 355) 00:31:21.955 11.462 - 11.520: 27.6575% ( 413) 00:31:21.955 11.520 - 11.578: 32.3914% ( 387) 00:31:21.955 11.578 - 11.636: 35.5719% ( 260) 00:31:21.955 11.636 - 11.695: 39.2416% ( 300) 00:31:21.956 11.695 - 11.753: 45.6147% ( 521) 00:31:21.956 11.753 - 11.811: 52.8685% ( 593) 00:31:21.956 11.811 - 11.869: 58.4587% ( 457) 00:31:21.956 11.869 - 11.927: 62.5810% ( 337) 00:31:21.956 11.927 - 11.985: 65.6269% ( 249) 00:31:21.956 11.985 - 12.044: 69.6269% ( 327) 00:31:21.956 12.044 - 12.102: 73.5291% ( 319) 00:31:21.956 12.102 - 12.160: 76.3792% ( 233) 00:31:21.956 12.160 - 12.218: 78.4832% ( 172) 00:31:21.956 12.218 - 12.276: 80.0856% ( 131) 00:31:21.956 12.276 - 12.335: 81.4067% ( 108) 00:31:21.956 12.335 - 12.393: 82.6789% ( 104) 00:31:21.956 12.393 - 12.451: 84.0122% ( 109) 00:31:21.956 12.451 - 12.509: 84.7462% ( 60) 00:31:21.956 12.509 - 12.567: 85.2844% ( 44) 00:31:21.956 12.567 - 12.625: 85.5535% ( 22) 00:31:21.956 12.625 - 12.684: 85.8471% ( 24) 00:31:21.956 12.684 - 12.742: 86.0917% ( 20) 00:31:21.956 12.742 - 12.800: 86.4709% ( 31) 00:31:21.956 12.800 - 12.858: 86.8379% ( 30) 00:31:21.956 12.858 - 12.916: 87.1682% ( 27) 00:31:21.956 12.916 - 12.975: 87.4740% ( 25) 00:31:21.956 12.975 - 13.033: 87.6697% ( 16) 00:31:21.956 13.033 
- 13.091: 87.8165% ( 12) 00:31:21.956 13.091 - 13.149: 87.9511% ( 11) 00:31:21.956 13.149 - 13.207: 88.0367% ( 7) 00:31:21.956 13.207 - 13.265: 88.0856% ( 4) 00:31:21.956 13.265 - 13.324: 88.1468% ( 5) 00:31:21.956 13.324 - 13.382: 88.2080% ( 5) 00:31:21.956 13.382 - 13.440: 88.2813% ( 6) 00:31:21.956 13.440 - 13.498: 88.3425% ( 5) 00:31:21.956 13.498 - 13.556: 88.4281% ( 7) 00:31:21.956 13.556 - 13.615: 88.4893% ( 5) 00:31:21.956 13.615 - 13.673: 88.5015% ( 1) 00:31:21.956 13.673 - 13.731: 88.5505% ( 4) 00:31:21.956 13.731 - 13.789: 88.5994% ( 4) 00:31:21.956 13.789 - 13.847: 88.6361% ( 3) 00:31:21.956 13.847 - 13.905: 88.6972% ( 5) 00:31:21.956 13.905 - 13.964: 88.7217% ( 2) 00:31:21.956 13.964 - 14.022: 88.7339% ( 1) 00:31:21.956 14.022 - 14.080: 88.7462% ( 1) 00:31:21.956 14.080 - 14.138: 88.7584% ( 1) 00:31:21.956 14.138 - 14.196: 88.7706% ( 1) 00:31:21.956 14.313 - 14.371: 88.7829% ( 1) 00:31:21.956 14.371 - 14.429: 88.8073% ( 2) 00:31:21.956 14.429 - 14.487: 88.8318% ( 2) 00:31:21.956 14.487 - 14.545: 88.8563% ( 2) 00:31:21.956 14.545 - 14.604: 88.9052% ( 4) 00:31:21.956 14.662 - 14.720: 88.9174% ( 1) 00:31:21.956 14.778 - 14.836: 88.9419% ( 2) 00:31:21.956 14.836 - 14.895: 88.9664% ( 2) 00:31:21.956 14.895 - 15.011: 89.0398% ( 6) 00:31:21.956 15.011 - 15.127: 89.0765% ( 3) 00:31:21.956 15.127 - 15.244: 89.1498% ( 6) 00:31:21.956 15.244 - 15.360: 89.1865% ( 3) 00:31:21.956 15.360 - 15.476: 89.2232% ( 3) 00:31:21.956 15.476 - 15.593: 89.2599% ( 3) 00:31:21.956 15.593 - 15.709: 89.3089% ( 4) 00:31:21.956 15.709 - 15.825: 89.3211% ( 1) 00:31:21.956 15.825 - 15.942: 89.3578% ( 3) 00:31:21.956 15.942 - 16.058: 89.3823% ( 2) 00:31:21.956 16.058 - 16.175: 89.3945% ( 1) 00:31:21.956 16.175 - 16.291: 89.4434% ( 4) 00:31:21.956 16.291 - 16.407: 89.4801% ( 3) 00:31:21.956 16.407 - 16.524: 89.5168% ( 3) 00:31:21.956 16.524 - 16.640: 89.5535% ( 3) 00:31:21.956 16.640 - 16.756: 89.5902% ( 3) 00:31:21.956 16.756 - 16.873: 89.6024% ( 1) 00:31:21.956 16.873 - 16.989: 89.6636% ( 5) 00:31:21.956 16.989 - 17.105: 89.7003% ( 3) 00:31:21.956 17.105 - 17.222: 89.7248% ( 2) 00:31:21.956 17.222 - 17.338: 89.7737% ( 4) 00:31:21.956 17.338 - 17.455: 89.8104% ( 3) 00:31:21.956 17.455 - 17.571: 89.8593% ( 4) 00:31:21.956 17.571 - 17.687: 89.9205% ( 5) 00:31:21.956 17.687 - 17.804: 89.9694% ( 4) 00:31:21.956 17.804 - 17.920: 89.9939% ( 2) 00:31:21.956 17.920 - 18.036: 90.0306% ( 3) 00:31:21.956 18.036 - 18.153: 90.0795% ( 4) 00:31:21.956 18.269 - 18.385: 90.1407% ( 5) 00:31:21.956 18.385 - 18.502: 90.1774% ( 3) 00:31:21.956 18.502 - 18.618: 90.2018% ( 2) 00:31:21.956 18.618 - 18.735: 90.2263% ( 2) 00:31:21.956 18.735 - 18.851: 90.2997% ( 6) 00:31:21.956 18.851 - 18.967: 90.3853% ( 7) 00:31:21.956 18.967 - 19.084: 90.3976% ( 1) 00:31:21.956 19.084 - 19.200: 90.4220% ( 2) 00:31:21.956 19.200 - 19.316: 90.4465% ( 2) 00:31:21.956 19.316 - 19.433: 90.4832% ( 3) 00:31:21.956 19.433 - 19.549: 90.5321% ( 4) 00:31:21.956 19.549 - 19.665: 90.5566% ( 2) 00:31:21.956 19.665 - 19.782: 90.5933% ( 3) 00:31:21.956 19.782 - 19.898: 90.6177% ( 2) 00:31:21.956 19.898 - 20.015: 90.6544% ( 3) 00:31:21.956 20.015 - 20.131: 90.6667% ( 1) 00:31:21.956 20.247 - 20.364: 90.6911% ( 2) 00:31:21.956 20.364 - 20.480: 90.7156% ( 2) 00:31:21.956 20.480 - 20.596: 90.7523% ( 3) 00:31:21.956 20.596 - 20.713: 90.7768% ( 2) 00:31:21.956 20.713 - 20.829: 90.8012% ( 2) 00:31:21.956 20.829 - 20.945: 90.8502% ( 4) 00:31:21.956 20.945 - 21.062: 90.8746% ( 2) 00:31:21.956 21.062 - 21.178: 90.8991% ( 2) 00:31:21.956 21.178 - 21.295: 90.9235% ( 2) 
00:31:21.956 21.411 - 21.527: 90.9480% ( 2) 00:31:21.956 21.527 - 21.644: 90.9602% ( 1) 00:31:21.956 21.644 - 21.760: 90.9725% ( 1) 00:31:21.956 21.760 - 21.876: 91.0459% ( 6) 00:31:21.956 21.876 - 21.993: 91.0948% ( 4) 00:31:21.956 21.993 - 22.109: 91.1193% ( 2) 00:31:21.956 22.109 - 22.225: 91.1437% ( 2) 00:31:21.956 22.225 - 22.342: 91.1682% ( 2) 00:31:21.956 22.342 - 22.458: 91.1927% ( 2) 00:31:21.956 22.458 - 22.575: 91.2416% ( 4) 00:31:21.956 22.691 - 22.807: 91.2783% ( 3) 00:31:21.956 22.807 - 22.924: 91.3028% ( 2) 00:31:21.956 22.924 - 23.040: 91.3272% ( 2) 00:31:21.956 23.040 - 23.156: 91.3394% ( 1) 00:31:21.956 23.273 - 23.389: 91.3761% ( 3) 00:31:21.956 23.389 - 23.505: 91.3884% ( 1) 00:31:21.956 23.505 - 23.622: 91.4006% ( 1) 00:31:21.956 23.622 - 23.738: 91.4251% ( 2) 00:31:21.956 23.738 - 23.855: 91.4373% ( 1) 00:31:21.956 23.855 - 23.971: 91.4495% ( 1) 00:31:21.956 24.204 - 24.320: 91.4618% ( 1) 00:31:21.956 24.320 - 24.436: 91.4740% ( 1) 00:31:21.956 24.436 - 24.553: 91.4862% ( 1) 00:31:21.956 24.669 - 24.785: 91.5107% ( 2) 00:31:21.956 24.785 - 24.902: 91.5229% ( 1) 00:31:21.956 25.018 - 25.135: 91.5474% ( 2) 00:31:21.956 25.251 - 25.367: 91.5596% ( 1) 00:31:21.956 25.367 - 25.484: 91.5841% ( 2) 00:31:21.956 25.484 - 25.600: 91.5963% ( 1) 00:31:21.956 25.600 - 25.716: 91.6697% ( 6) 00:31:21.956 25.716 - 25.833: 91.7554% ( 7) 00:31:21.956 25.833 - 25.949: 91.9144% ( 13) 00:31:21.956 25.949 - 26.065: 92.1101% ( 16) 00:31:21.956 26.065 - 26.182: 92.3425% ( 19) 00:31:21.956 26.182 - 26.298: 92.6972% ( 29) 00:31:21.956 26.298 - 26.415: 93.0642% ( 30) 00:31:21.957 26.415 - 26.531: 93.5535% ( 40) 00:31:21.957 26.531 - 26.647: 94.1040% ( 45) 00:31:21.957 26.647 - 26.764: 94.6789% ( 47) 00:31:21.957 26.764 - 26.880: 95.3761% ( 57) 00:31:21.957 26.880 - 26.996: 96.1101% ( 60) 00:31:21.957 26.996 - 27.113: 96.5627% ( 37) 00:31:21.957 27.113 - 27.229: 97.0398% ( 39) 00:31:21.957 27.229 - 27.345: 97.4190% ( 31) 00:31:21.957 27.345 - 27.462: 97.7248% ( 25) 00:31:21.957 27.462 - 27.578: 97.9205% ( 16) 00:31:21.957 27.578 - 27.695: 98.0673% ( 12) 00:31:21.957 27.695 - 27.811: 98.2875% ( 18) 00:31:21.957 27.811 - 27.927: 98.4465% ( 13) 00:31:21.957 27.927 - 28.044: 98.5076% ( 5) 00:31:21.957 28.044 - 28.160: 98.5933% ( 7) 00:31:21.957 28.160 - 28.276: 98.6544% ( 5) 00:31:21.957 28.276 - 28.393: 98.7156% ( 5) 00:31:21.957 28.393 - 28.509: 98.7768% ( 5) 00:31:21.957 28.509 - 28.625: 98.8135% ( 3) 00:31:21.957 28.625 - 28.742: 98.8257% ( 1) 00:31:21.957 28.742 - 28.858: 98.8624% ( 3) 00:31:21.957 28.858 - 28.975: 98.8746% ( 1) 00:31:21.957 28.975 - 29.091: 98.9113% ( 3) 00:31:21.957 29.091 - 29.207: 98.9358% ( 2) 00:31:21.957 29.673 - 29.789: 98.9602% ( 2) 00:31:21.957 29.789 - 30.022: 98.9725% ( 1) 00:31:21.957 30.720 - 30.953: 98.9969% ( 2) 00:31:21.957 30.953 - 31.185: 99.0092% ( 1) 00:31:21.957 31.185 - 31.418: 99.0214% ( 1) 00:31:21.957 31.418 - 31.651: 99.0459% ( 2) 00:31:21.957 31.651 - 31.884: 99.0703% ( 2) 00:31:21.957 31.884 - 32.116: 99.1070% ( 3) 00:31:21.957 32.116 - 32.349: 99.1437% ( 3) 00:31:21.957 32.349 - 32.582: 99.1682% ( 2) 00:31:21.957 32.582 - 32.815: 99.1804% ( 1) 00:31:21.957 32.815 - 33.047: 99.2171% ( 3) 00:31:21.957 33.047 - 33.280: 99.2294% ( 1) 00:31:21.957 33.280 - 33.513: 99.2661% ( 3) 00:31:21.957 33.513 - 33.745: 99.3028% ( 3) 00:31:21.957 33.978 - 34.211: 99.3150% ( 1) 00:31:21.957 34.211 - 34.444: 99.3394% ( 2) 00:31:21.957 34.444 - 34.676: 99.3884% ( 4) 00:31:21.957 35.375 - 35.607: 99.4006% ( 1) 00:31:21.957 35.607 - 35.840: 99.4128% ( 1) 00:31:21.957 
35.840 - 36.073: 99.4251% ( 1) 00:31:21.957 36.305 - 36.538: 99.4495% ( 2) 00:31:21.957 36.771 - 37.004: 99.4618% ( 1) 00:31:21.957 37.469 - 37.702: 99.4740% ( 1) 00:31:21.957 37.702 - 37.935: 99.4862% ( 1) 00:31:21.957 38.400 - 38.633: 99.4985% ( 1) 00:31:21.957 38.633 - 38.865: 99.5229% ( 2) 00:31:21.957 38.865 - 39.098: 99.5474% ( 2) 00:31:21.957 39.331 - 39.564: 99.5596% ( 1) 00:31:21.957 39.564 - 39.796: 99.5719% ( 1) 00:31:21.957 39.796 - 40.029: 99.5963% ( 2) 00:31:21.957 40.262 - 40.495: 99.6086% ( 1) 00:31:21.957 40.495 - 40.727: 99.6208% ( 1) 00:31:21.957 40.727 - 40.960: 99.6453% ( 2) 00:31:21.957 41.425 - 41.658: 99.6697% ( 2) 00:31:21.957 42.124 - 42.356: 99.6820% ( 1) 00:31:21.957 42.356 - 42.589: 99.6942% ( 1) 00:31:21.957 42.589 - 42.822: 99.7064% ( 1) 00:31:21.957 42.822 - 43.055: 99.7431% ( 3) 00:31:21.957 43.520 - 43.753: 99.7676% ( 2) 00:31:21.957 43.753 - 43.985: 99.7798% ( 1) 00:31:21.957 44.218 - 44.451: 99.7920% ( 1) 00:31:21.957 47.011 - 47.244: 99.8043% ( 1) 00:31:21.957 47.244 - 47.476: 99.8165% ( 1) 00:31:21.957 48.407 - 48.640: 99.8410% ( 2) 00:31:21.957 48.640 - 48.873: 99.8532% ( 1) 00:31:21.957 49.105 - 49.338: 99.8654% ( 1) 00:31:21.957 50.269 - 50.502: 99.8777% ( 1) 00:31:21.957 54.225 - 54.458: 99.8899% ( 1) 00:31:21.957 55.622 - 55.855: 99.9021% ( 1) 00:31:21.957 56.320 - 56.553: 99.9144% ( 1) 00:31:21.957 58.415 - 58.647: 99.9266% ( 1) 00:31:21.957 58.880 - 59.113: 99.9388% ( 1) 00:31:21.957 63.302 - 63.767: 99.9511% ( 1) 00:31:21.957 63.767 - 64.233: 99.9633% ( 1) 00:31:21.957 64.233 - 64.698: 99.9755% ( 1) 00:31:21.957 70.284 - 70.749: 99.9878% ( 1) 00:31:21.957 116.829 - 117.295: 100.0000% ( 1) 00:31:21.957 00:31:21.957 Complete histogram 00:31:21.957 ================== 00:31:21.957 Range in us Cumulative Count 00:31:21.957 7.331 - 7.360: 0.0122% ( 1) 00:31:21.957 7.389 - 7.418: 0.0367% ( 2) 00:31:21.957 7.418 - 7.447: 0.1101% ( 6) 00:31:21.957 7.447 - 7.505: 0.3180% ( 17) 00:31:21.957 7.505 - 7.564: 0.5138% ( 16) 00:31:21.957 7.564 - 7.622: 0.6361% ( 10) 00:31:21.957 7.622 - 7.680: 0.8440% ( 17) 00:31:21.957 7.680 - 7.738: 1.1009% ( 21) 00:31:21.957 7.738 - 7.796: 1.9327% ( 68) 00:31:21.957 7.796 - 7.855: 3.1804% ( 102) 00:31:21.957 7.855 - 7.913: 4.0122% ( 68) 00:31:21.957 7.913 - 7.971: 5.2110% ( 98) 00:31:21.957 7.971 - 8.029: 8.2569% ( 249) 00:31:21.957 8.029 - 8.087: 11.0459% ( 228) 00:31:21.957 8.087 - 8.145: 12.3670% ( 108) 00:31:21.957 8.145 - 8.204: 15.6942% ( 272) 00:31:21.957 8.204 - 8.262: 21.6881% ( 490) 00:31:21.957 8.262 - 8.320: 25.7125% ( 329) 00:31:21.957 8.320 - 8.378: 28.7584% ( 249) 00:31:21.957 8.378 - 8.436: 35.1070% ( 519) 00:31:21.957 8.436 - 8.495: 44.3670% ( 757) 00:31:21.957 8.495 - 8.553: 49.0153% ( 380) 00:31:21.957 8.553 - 8.611: 51.7676% ( 225) 00:31:21.957 8.611 - 8.669: 57.5413% ( 472) 00:31:21.957 8.669 - 8.727: 64.6728% ( 583) 00:31:21.957 8.727 - 8.785: 68.5015% ( 313) 00:31:21.957 8.785 - 8.844: 70.8991% ( 196) 00:31:21.957 8.844 - 8.902: 74.2263% ( 272) 00:31:21.957 8.902 - 8.960: 78.3242% ( 335) 00:31:21.957 8.960 - 9.018: 81.1254% ( 229) 00:31:21.957 9.018 - 9.076: 82.2630% ( 93) 00:31:21.957 9.076 - 9.135: 83.3639% ( 90) 00:31:21.957 9.135 - 9.193: 84.9786% ( 132) 00:31:21.957 9.193 - 9.251: 86.4098% ( 117) 00:31:21.957 9.251 - 9.309: 87.4985% ( 89) 00:31:21.957 9.309 - 9.367: 88.0734% ( 47) 00:31:21.957 9.367 - 9.425: 88.4281% ( 29) 00:31:21.957 9.425 - 9.484: 88.6606% ( 19) 00:31:21.957 9.484 - 9.542: 88.8563% ( 16) 00:31:21.957 9.542 - 9.600: 88.9664% ( 9) 00:31:21.957 9.600 - 9.658: 89.1254% ( 13) 
00:31:21.957 9.658 - 9.716: 89.1621% ( 3) 00:31:21.957 9.716 - 9.775: 89.2477% ( 7) 00:31:21.957 9.775 - 9.833: 89.3333% ( 7) 00:31:21.957 9.833 - 9.891: 89.3578% ( 2) 00:31:21.957 9.891 - 9.949: 89.3823% ( 2) 00:31:21.957 9.949 - 10.007: 89.4067% ( 2) 00:31:21.957 10.007 - 10.065: 89.4557% ( 4) 00:31:21.957 10.124 - 10.182: 89.4801% ( 2) 00:31:21.957 10.182 - 10.240: 89.4924% ( 1) 00:31:21.957 10.298 - 10.356: 89.5046% ( 1) 00:31:21.957 10.356 - 10.415: 89.5168% ( 1) 00:31:21.957 10.415 - 10.473: 89.5291% ( 1) 00:31:21.957 10.531 - 10.589: 89.5413% ( 1) 00:31:21.957 10.589 - 10.647: 89.5535% ( 1) 00:31:21.957 10.764 - 10.822: 89.5657% ( 1) 00:31:21.958 10.822 - 10.880: 89.5780% ( 1) 00:31:21.958 10.880 - 10.938: 89.5902% ( 1) 00:31:21.958 11.113 - 11.171: 89.6024% ( 1) 00:31:21.958 11.345 - 11.404: 89.6147% ( 1) 00:31:21.958 11.695 - 11.753: 89.6269% ( 1) 00:31:21.958 11.753 - 11.811: 89.6391% ( 1) 00:31:21.958 11.927 - 11.985: 89.6514% ( 1) 00:31:21.958 12.218 - 12.276: 89.6636% ( 1) 00:31:21.958 12.276 - 12.335: 89.6758% ( 1) 00:31:21.958 12.393 - 12.451: 89.7003% ( 2) 00:31:21.958 12.451 - 12.509: 89.7248% ( 2) 00:31:21.958 12.509 - 12.567: 89.7492% ( 2) 00:31:21.958 12.625 - 12.684: 89.7615% ( 1) 00:31:21.958 12.858 - 12.916: 89.7737% ( 1) 00:31:21.958 12.916 - 12.975: 89.7859% ( 1) 00:31:21.958 12.975 - 13.033: 89.8104% ( 2) 00:31:21.958 13.033 - 13.091: 89.8349% ( 2) 00:31:21.958 13.091 - 13.149: 89.8471% ( 1) 00:31:21.958 13.149 - 13.207: 89.8716% ( 2) 00:31:21.958 13.207 - 13.265: 89.8960% ( 2) 00:31:21.958 13.265 - 13.324: 89.9083% ( 1) 00:31:21.958 13.324 - 13.382: 89.9572% ( 4) 00:31:21.958 13.382 - 13.440: 89.9817% ( 2) 00:31:21.958 13.440 - 13.498: 90.0061% ( 2) 00:31:21.958 13.498 - 13.556: 90.0183% ( 1) 00:31:21.958 13.556 - 13.615: 90.0428% ( 2) 00:31:21.958 13.615 - 13.673: 90.0673% ( 2) 00:31:21.958 13.673 - 13.731: 90.1162% ( 4) 00:31:21.958 13.731 - 13.789: 90.1284% ( 1) 00:31:21.958 13.789 - 13.847: 90.1407% ( 1) 00:31:21.958 13.847 - 13.905: 90.1651% ( 2) 00:31:21.958 13.905 - 13.964: 90.1896% ( 2) 00:31:21.958 13.964 - 14.022: 90.2018% ( 1) 00:31:21.958 14.022 - 14.080: 90.2141% ( 1) 00:31:21.958 14.138 - 14.196: 90.2385% ( 2) 00:31:21.958 14.196 - 14.255: 90.2875% ( 4) 00:31:21.958 14.255 - 14.313: 90.3119% ( 2) 00:31:21.958 14.313 - 14.371: 90.3364% ( 2) 00:31:21.958 14.371 - 14.429: 90.3486% ( 1) 00:31:21.958 14.429 - 14.487: 90.3609% ( 1) 00:31:21.958 14.487 - 14.545: 90.3853% ( 2) 00:31:21.958 14.545 - 14.604: 90.4098% ( 2) 00:31:21.958 14.604 - 14.662: 90.4220% ( 1) 00:31:21.958 14.662 - 14.720: 90.4343% ( 1) 00:31:21.958 14.720 - 14.778: 90.4587% ( 2) 00:31:21.958 14.778 - 14.836: 90.4832% ( 2) 00:31:21.958 14.836 - 14.895: 90.4954% ( 1) 00:31:21.958 14.895 - 15.011: 90.5199% ( 2) 00:31:21.958 15.244 - 15.360: 90.5321% ( 1) 00:31:21.958 15.360 - 15.476: 90.5566% ( 2) 00:31:21.958 15.476 - 15.593: 90.6055% ( 4) 00:31:21.958 15.593 - 15.709: 90.6177% ( 1) 00:31:21.958 15.942 - 16.058: 90.6544% ( 3) 00:31:21.958 16.058 - 16.175: 90.6667% ( 1) 00:31:21.958 16.175 - 16.291: 90.6789% ( 1) 00:31:21.958 16.291 - 16.407: 90.7034% ( 2) 00:31:21.958 16.407 - 16.524: 90.7401% ( 3) 00:31:21.958 16.524 - 16.640: 90.7768% ( 3) 00:31:21.958 16.640 - 16.756: 90.7890% ( 1) 00:31:21.958 16.989 - 17.105: 90.8624% ( 6) 00:31:21.958 17.105 - 17.222: 90.8746% ( 1) 00:31:21.958 17.222 - 17.338: 90.8869% ( 1) 00:31:21.958 17.338 - 17.455: 90.9113% ( 2) 00:31:21.958 17.571 - 17.687: 90.9235% ( 1) 00:31:21.958 17.687 - 17.804: 90.9602% ( 3) 00:31:21.958 17.804 - 17.920: 90.9847% ( 2) 
00:31:21.958 17.920 - 18.036: 91.0092% ( 2) 00:31:21.958 18.036 - 18.153: 91.0214% ( 1) 00:31:21.958 18.153 - 18.269: 91.0581% ( 3) 00:31:21.958 18.269 - 18.385: 91.0826% ( 2) 00:31:21.958 18.385 - 18.502: 91.1193% ( 3) 00:31:21.958 18.502 - 18.618: 91.1315% ( 1) 00:31:21.958 18.967 - 19.084: 91.1437% ( 1) 00:31:21.958 19.084 - 19.200: 91.1560% ( 1) 00:31:21.958 19.200 - 19.316: 91.1927% ( 3) 00:31:21.958 19.316 - 19.433: 91.2538% ( 5) 00:31:21.958 19.433 - 19.549: 91.2905% ( 3) 00:31:21.958 19.549 - 19.665: 91.3028% ( 1) 00:31:21.958 19.665 - 19.782: 91.3272% ( 2) 00:31:21.958 19.782 - 19.898: 91.3761% ( 4) 00:31:21.958 19.898 - 20.015: 91.4006% ( 2) 00:31:21.958 20.131 - 20.247: 91.4251% ( 2) 00:31:21.958 20.247 - 20.364: 91.4618% ( 3) 00:31:21.958 20.364 - 20.480: 91.4862% ( 2) 00:31:21.958 20.480 - 20.596: 91.5107% ( 2) 00:31:21.958 20.945 - 21.062: 91.5229% ( 1) 00:31:21.958 21.411 - 21.527: 91.5352% ( 1) 00:31:21.958 21.527 - 21.644: 91.5474% ( 1) 00:31:21.958 21.760 - 21.876: 91.5596% ( 1) 00:31:21.958 21.993 - 22.109: 91.5841% ( 2) 00:31:21.958 22.109 - 22.225: 91.6208% ( 3) 00:31:21.958 22.225 - 22.342: 91.6575% ( 3) 00:31:21.958 22.342 - 22.458: 91.7309% ( 6) 00:31:21.958 22.458 - 22.575: 91.9266% ( 16) 00:31:21.958 22.575 - 22.691: 92.1223% ( 16) 00:31:21.958 22.691 - 22.807: 92.3547% ( 19) 00:31:21.958 22.807 - 22.924: 92.5627% ( 17) 00:31:21.958 22.924 - 23.040: 93.0398% ( 39) 00:31:21.958 23.040 - 23.156: 93.4924% ( 37) 00:31:21.958 23.156 - 23.273: 94.0306% ( 44) 00:31:21.958 23.273 - 23.389: 94.5688% ( 44) 00:31:21.958 23.389 - 23.505: 95.2294% ( 54) 00:31:21.958 23.505 - 23.622: 95.9021% ( 55) 00:31:21.958 23.622 - 23.738: 96.5749% ( 55) 00:31:21.958 23.738 - 23.855: 96.9908% ( 34) 00:31:21.958 23.855 - 23.971: 97.3823% ( 32) 00:31:21.958 23.971 - 24.087: 97.6514% ( 22) 00:31:21.958 24.087 - 24.204: 97.9083% ( 21) 00:31:21.958 24.204 - 24.320: 98.1162% ( 17) 00:31:21.958 24.320 - 24.436: 98.3609% ( 20) 00:31:21.958 24.436 - 24.553: 98.5076% ( 12) 00:31:21.958 24.553 - 24.669: 98.6055% ( 8) 00:31:21.958 24.669 - 24.785: 98.7278% ( 10) 00:31:21.958 24.785 - 24.902: 98.8135% ( 7) 00:31:21.958 24.902 - 25.018: 98.8746% ( 5) 00:31:21.958 25.018 - 25.135: 98.9358% ( 5) 00:31:21.958 25.135 - 25.251: 98.9480% ( 1) 00:31:21.958 25.251 - 25.367: 98.9847% ( 3) 00:31:21.958 25.484 - 25.600: 98.9969% ( 1) 00:31:21.958 25.949 - 26.065: 99.0214% ( 2) 00:31:21.958 26.182 - 26.298: 99.0703% ( 4) 00:31:21.958 26.415 - 26.531: 99.0826% ( 1) 00:31:21.958 26.531 - 26.647: 99.0948% ( 1) 00:31:21.958 26.764 - 26.880: 99.1070% ( 1) 00:31:21.958 26.880 - 26.996: 99.1193% ( 1) 00:31:21.958 27.811 - 27.927: 99.1315% ( 1) 00:31:21.958 28.044 - 28.160: 99.1437% ( 1) 00:31:21.958 28.276 - 28.393: 99.1682% ( 2) 00:31:21.958 28.625 - 28.742: 99.2049% ( 3) 00:31:21.958 28.742 - 28.858: 99.2171% ( 1) 00:31:21.958 28.975 - 29.091: 99.2416% ( 2) 00:31:21.958 29.207 - 29.324: 99.2538% ( 1) 00:31:21.958 29.324 - 29.440: 99.2905% ( 3) 00:31:21.958 29.440 - 29.556: 99.3028% ( 1) 00:31:21.958 29.556 - 29.673: 99.3150% ( 1) 00:31:21.959 29.673 - 29.789: 99.3272% ( 1) 00:31:21.959 30.255 - 30.487: 99.3394% ( 1) 00:31:21.959 30.487 - 30.720: 99.3884% ( 4) 00:31:21.959 30.953 - 31.185: 99.4495% ( 5) 00:31:21.959 31.185 - 31.418: 99.4862% ( 3) 00:31:21.959 31.651 - 31.884: 99.4985% ( 1) 00:31:21.959 31.884 - 32.116: 99.5107% ( 1) 00:31:21.959 33.280 - 33.513: 99.5596% ( 4) 00:31:21.959 33.978 - 34.211: 99.5719% ( 1) 00:31:21.959 34.211 - 34.444: 99.5841% ( 1) 00:31:21.959 34.444 - 34.676: 99.5963% ( 1) 00:31:21.959 
35.142 - 35.375: 99.6086% ( 1) 00:31:21.959 35.375 - 35.607: 99.6330% ( 2) 00:31:21.959 36.073 - 36.305: 99.6453% ( 1) 00:31:21.959 36.305 - 36.538: 99.6575% ( 1) 00:31:21.959 36.771 - 37.004: 99.6697% ( 1) 00:31:21.959 37.004 - 37.236: 99.6820% ( 1) 00:31:21.959 37.236 - 37.469: 99.6942% ( 1) 00:31:21.959 37.469 - 37.702: 99.7064% ( 1) 00:31:21.959 40.029 - 40.262: 99.7187% ( 1) 00:31:21.959 40.960 - 41.193: 99.7309% ( 1) 00:31:21.959 41.193 - 41.425: 99.7431% ( 1) 00:31:21.959 41.425 - 41.658: 99.7554% ( 1) 00:31:21.959 41.658 - 41.891: 99.7676% ( 1) 00:31:21.959 42.822 - 43.055: 99.7798% ( 1) 00:31:21.959 44.451 - 44.684: 99.7920% ( 1) 00:31:21.959 44.916 - 45.149: 99.8043% ( 1) 00:31:21.959 45.149 - 45.382: 99.8165% ( 1) 00:31:21.959 45.382 - 45.615: 99.8287% ( 1) 00:31:21.959 45.847 - 46.080: 99.8410% ( 1) 00:31:21.959 46.080 - 46.313: 99.8532% ( 1) 00:31:21.959 46.778 - 47.011: 99.8654% ( 1) 00:31:21.959 47.011 - 47.244: 99.8777% ( 1) 00:31:21.959 47.244 - 47.476: 99.8899% ( 1) 00:31:21.959 48.407 - 48.640: 99.9021% ( 1) 00:31:21.959 49.338 - 49.571: 99.9144% ( 1) 00:31:21.959 54.225 - 54.458: 99.9266% ( 1) 00:31:21.959 57.716 - 57.949: 99.9388% ( 1) 00:31:21.959 59.113 - 59.345: 99.9511% ( 1) 00:31:21.959 64.233 - 64.698: 99.9633% ( 1) 00:31:21.959 65.629 - 66.095: 99.9755% ( 1) 00:31:21.959 88.902 - 89.367: 99.9878% ( 1) 00:31:21.959 95.418 - 95.884: 100.0000% ( 1) 00:31:21.959 00:31:21.959 00:31:21.959 real 0m1.330s 00:31:21.959 user 0m1.119s 00:31:21.959 sys 0m0.129s 00:31:21.959 22:39:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.959 ************************************ 00:31:21.959 22:39:20 -- common/autotest_common.sh@10 -- # set +x 00:31:21.959 END TEST nvme_overhead 00:31:21.959 ************************************ 00:31:21.959 22:39:20 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:31:21.959 22:39:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:31:21.959 22:39:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:21.959 22:39:20 -- common/autotest_common.sh@10 -- # set +x 00:31:21.959 ************************************ 00:31:21.959 START TEST nvme_arbitration 00:31:21.959 ************************************ 00:31:21.959 22:39:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:31:26.144 Initializing NVMe Controllers 00:31:26.145 Attached to 0000:00:06.0 00:31:26.145 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:31:26.145 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:31:26.145 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:31:26.145 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:31:26.145 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:31:26.145 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:31:26.145 Initialization complete. Launching workers. 
00:31:26.145 Starting thread on core 1 with urgent priority queue 00:31:26.145 Starting thread on core 2 with urgent priority queue 00:31:26.145 Starting thread on core 0 with urgent priority queue 00:31:26.145 Starting thread on core 3 with urgent priority queue 00:31:26.145 QEMU NVMe Ctrl (12340 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:31:26.145 QEMU NVMe Ctrl (12340 ) core 1: 1856.00 IO/s 53.88 secs/100000 ios 00:31:26.145 QEMU NVMe Ctrl (12340 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:31:26.145 QEMU NVMe Ctrl (12340 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:31:26.145 ======================================================== 00:31:26.145 00:31:26.145 00:31:26.145 real 0m3.460s 00:31:26.145 user 0m9.460s 00:31:26.145 sys 0m0.128s 00:31:26.145 22:39:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.145 22:39:24 -- common/autotest_common.sh@10 -- # set +x 00:31:26.145 ************************************ 00:31:26.145 END TEST nvme_arbitration 00:31:26.145 ************************************ 00:31:26.145 22:39:24 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:31:26.145 22:39:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:26.145 22:39:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:26.145 22:39:24 -- common/autotest_common.sh@10 -- # set +x 00:31:26.145 ************************************ 00:31:26.145 START TEST nvme_single_aen 00:31:26.145 ************************************ 00:31:26.145 22:39:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:31:26.145 [2024-07-14 22:39:24.501859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:26.145 [2024-07-14 22:39:24.501969] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.145 [2024-07-14 22:39:24.686061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:26.145 Asynchronous Event Request test 00:31:26.145 Attached to 0000:00:06.0 00:31:26.145 Reset controller to setup AER completions for this process 00:31:26.145 Registering asynchronous event callbacks... 00:31:26.145 Getting orig temperature thresholds of all controllers 00:31:26.145 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:26.145 Setting all controllers temperature threshold low to trigger AER 00:31:26.145 Waiting for all controllers temperature threshold to be set lower 00:31:26.145 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:26.145 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:31:26.145 Waiting for all controllers to trigger AER and reset threshold 00:31:26.145 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:26.145 Cleaning up... 
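A quick consistency check on the arbitration table above: the secs/100000 ios column is simply 100000 divided by the measured IO/s, e.g. for the core 0 row at 1045.33 IO/s:

```bash
# secs per 100000 ios = 100000 / IOPS; reproduces the core 0 row (95.66)
awk 'BEGIN { printf "%.2f\n", 100000 / 1045.33 }'
```

The same relation holds for the other rows, e.g. 100000 / 490.67 = 203.80 for cores 2 and 3.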
00:31:26.145 00:31:26.145 real 0m0.289s 00:31:26.145 user 0m0.091s 00:31:26.145 sys 0m0.130s 00:31:26.145 22:39:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.145 22:39:24 -- common/autotest_common.sh@10 -- # set +x 00:31:26.145 ************************************ 00:31:26.145 END TEST nvme_single_aen 00:31:26.145 ************************************ 00:31:26.145 22:39:24 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:31:26.145 22:39:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:26.145 22:39:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:26.145 22:39:24 -- common/autotest_common.sh@10 -- # set +x 00:31:26.145 ************************************ 00:31:26.145 START TEST nvme_doorbell_aers 00:31:26.145 ************************************ 00:31:26.145 22:39:24 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:31:26.145 22:39:24 -- nvme/nvme.sh@70 -- # bdfs=() 00:31:26.145 22:39:24 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:31:26.145 22:39:24 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:31:26.145 22:39:24 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:31:26.145 22:39:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:26.145 22:39:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:26.145 22:39:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:26.145 22:39:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:26.145 22:39:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:26.145 22:39:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:26.145 22:39:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:26.145 22:39:24 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:26.145 22:39:24 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:26.145 [2024-07-14 22:39:25.136680] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143755) is not found. Dropping the request. 00:31:36.145 Executing: test_write_invalid_db 00:31:36.145 Waiting for AER completion... 00:31:36.145 Failure: test_write_invalid_db 00:31:36.145 00:31:36.145 Executing: test_invalid_db_write_overflow_sq 00:31:36.145 Waiting for AER completion... 00:31:36.145 Failure: test_invalid_db_write_overflow_sq 00:31:36.145 00:31:36.145 Executing: test_invalid_db_write_overflow_cq 00:31:36.145 Waiting for AER completion... 
00:31:36.145 Failure: test_invalid_db_write_overflow_cq 00:31:36.145 00:31:36.145 00:31:36.145 real 0m10.112s 00:31:36.145 user 0m8.524s 00:31:36.145 sys 0m1.534s 00:31:36.145 22:39:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.145 22:39:34 -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 ************************************ 00:31:36.145 END TEST nvme_doorbell_aers 00:31:36.145 ************************************ 00:31:36.145 22:39:34 -- nvme/nvme.sh@97 -- # uname 00:31:36.145 22:39:34 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:36.145 22:39:34 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:36.145 22:39:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:31:36.145 22:39:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.145 22:39:34 -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 ************************************ 00:31:36.145 START TEST nvme_multi_aen 00:31:36.145 ************************************ 00:31:36.146 22:39:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:36.146 [2024-07-14 22:39:35.013518] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:36.146 [2024-07-14 22:39:35.013679] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.146 [2024-07-14 22:39:35.213668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:36.146 [2024-07-14 22:39:35.213742] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143755) is not found. Dropping the request. 00:31:36.146 [2024-07-14 22:39:35.213857] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143755) is not found. Dropping the request. 00:31:36.146 [2024-07-14 22:39:35.213896] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143755) is not found. Dropping the request. 00:31:36.146 [2024-07-14 22:39:35.217691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:36.146 Child process pid: 143971 00:31:36.146 [2024-07-14 22:39:35.217783] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.714 [Child] Asynchronous Event Request test 00:31:36.714 [Child] Attached to 0000:00:06.0 00:31:36.714 [Child] Registering asynchronous event callbacks... 00:31:36.714 [Child] Getting orig temperature thresholds of all controllers 00:31:36.714 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:36.714 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:36.714 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:36.714 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:36.714 [Child] Cleaning up... 00:31:36.714 Asynchronous Event Request test 00:31:36.714 Attached to 0000:00:06.0 00:31:36.714 Reset controller to setup AER completions for this process 00:31:36.714 Registering asynchronous event callbacks... 
00:31:36.714 Getting orig temperature thresholds of all controllers 00:31:36.714 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:36.714 Setting all controllers temperature threshold low to trigger AER 00:31:36.714 Waiting for all controllers temperature threshold to be set lower 00:31:36.714 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:36.714 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:31:36.714 Waiting for all controllers to trigger AER and reset threshold 00:31:36.714 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:36.714 Cleaning up... 00:31:36.714 00:31:36.714 real 0m0.628s 00:31:36.714 user 0m0.204s 00:31:36.714 sys 0m0.238s 00:31:36.714 22:39:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.714 22:39:35 -- common/autotest_common.sh@10 -- # set +x 00:31:36.714 ************************************ 00:31:36.714 END TEST nvme_multi_aen 00:31:36.714 ************************************ 00:31:36.714 22:39:35 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:36.714 22:39:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:36.714 22:39:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.714 22:39:35 -- common/autotest_common.sh@10 -- # set +x 00:31:36.714 ************************************ 00:31:36.714 START TEST nvme_startup 00:31:36.714 ************************************ 00:31:36.715 22:39:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:36.974 Initializing NVMe Controllers 00:31:36.974 Attached to 0000:00:06.0 00:31:36.974 Initialization complete. 00:31:36.974 Time used:227311.703 (us). 00:31:36.974 00:31:36.974 real 0m0.327s 00:31:36.974 user 0m0.087s 00:31:36.974 sys 0m0.128s 00:31:36.974 22:39:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.974 22:39:35 -- common/autotest_common.sh@10 -- # set +x 00:31:36.974 ************************************ 00:31:36.974 END TEST nvme_startup 00:31:36.974 ************************************ 00:31:36.974 22:39:36 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:36.974 22:39:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:36.974 22:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.974 22:39:36 -- common/autotest_common.sh@10 -- # set +x 00:31:36.974 ************************************ 00:31:36.974 START TEST nvme_multi_secondary 00:31:36.974 ************************************ 00:31:36.974 22:39:36 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:31:36.974 22:39:36 -- nvme/nvme.sh@52 -- # pid0=144048 00:31:36.974 22:39:36 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:36.974 22:39:36 -- nvme/nvme.sh@54 -- # pid1=144049 00:31:36.974 22:39:36 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:36.974 22:39:36 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:41.166 Initializing NVMe Controllers 00:31:41.166 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:41.166 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:41.166 Initialization complete. Launching workers. 
00:31:41.166 ======================================================== 00:31:41.166 Latency(us) 00:31:41.166 Device Information : IOPS MiB/s Average min max 00:31:41.166 PCIE (0000:00:06.0) NSID 1 from core 2: 15895.21 62.09 1005.71 87.64 16893.31 00:31:41.166 ======================================================== 00:31:41.166 Total : 15895.21 62.09 1005.71 87.64 16893.31 00:31:41.166 00:31:41.166 22:39:39 -- nvme/nvme.sh@56 -- # wait 144048 00:31:41.166 Initializing NVMe Controllers 00:31:41.166 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:41.166 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:41.166 Initialization complete. Launching workers. 00:31:41.166 ======================================================== 00:31:41.166 Latency(us) 00:31:41.166 Device Information : IOPS MiB/s Average min max 00:31:41.166 PCIE (0000:00:06.0) NSID 1 from core 1: 34850.01 136.13 458.77 132.41 1929.77 00:31:41.167 ======================================================== 00:31:41.167 Total : 34850.01 136.13 458.77 132.41 1929.77 00:31:41.167 00:31:43.070 Initializing NVMe Controllers 00:31:43.070 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:43.070 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:43.070 Initialization complete. Launching workers. 00:31:43.070 ======================================================== 00:31:43.070 Latency(us) 00:31:43.070 Device Information : IOPS MiB/s Average min max 00:31:43.070 PCIE (0000:00:06.0) NSID 1 from core 0: 46344.52 181.03 344.93 87.50 1679.06 00:31:43.070 ======================================================== 00:31:43.070 Total : 46344.52 181.03 344.93 87.50 1679.06 00:31:43.070 00:31:43.070 22:39:41 -- nvme/nvme.sh@57 -- # wait 144049 00:31:43.070 22:39:41 -- nvme/nvme.sh@61 -- # pid0=144124 00:31:43.070 22:39:41 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:43.070 22:39:41 -- nvme/nvme.sh@63 -- # pid1=144125 00:31:43.070 22:39:41 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:43.070 22:39:41 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:46.354 Initializing NVMe Controllers 00:31:46.354 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:46.354 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:46.354 Initialization complete. Launching workers. 00:31:46.354 ======================================================== 00:31:46.354 Latency(us) 00:31:46.354 Device Information : IOPS MiB/s Average min max 00:31:46.354 PCIE (0000:00:06.0) NSID 1 from core 0: 36521.99 142.66 437.77 122.55 1392.19 00:31:46.354 ======================================================== 00:31:46.354 Total : 36521.99 142.66 437.77 122.55 1392.19 00:31:46.354 00:31:46.354 Initializing NVMe Controllers 00:31:46.354 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:46.354 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:46.354 Initialization complete. Launching workers. 
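In these spdk_nvme_perf tables the MiB/s column follows directly from IOPS at the 4096-byte transfer size the test requests (`-o 4096`): MiB/s = IOPS × 4096 / 2^20, i.e. IOPS / 256. Checking the core 1 row just above:

```bash
# MiB/s = IOPS * io_size / 2^20; with -o 4096 this is IOPS / 256
awk 'BEGIN { printf "%.2f\n", 34850.01 * 4096 / (1024 * 1024) }'   # prints 136.13
```

The average/min/max columns are latency in microseconds, per the Latency(us) header, so the core 1 run averages roughly 459 us per 4 KiB read at the queue depth of 16 set by `-q 16`.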
00:31:46.354 ======================================================== 00:31:46.354 Latency(us) 00:31:46.354 Device Information : IOPS MiB/s Average min max 00:31:46.354 PCIE (0000:00:06.0) NSID 1 from core 1: 34051.51 133.01 469.50 136.71 1410.35 00:31:46.354 ======================================================== 00:31:46.354 Total : 34051.51 133.01 469.50 136.71 1410.35 00:31:46.354 00:31:48.255 Initializing NVMe Controllers 00:31:48.255 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:48.255 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:48.255 Initialization complete. Launching workers. 00:31:48.255 ======================================================== 00:31:48.255 Latency(us) 00:31:48.255 Device Information : IOPS MiB/s Average min max 00:31:48.255 PCIE (0000:00:06.0) NSID 1 from core 2: 19387.62 75.73 824.83 141.51 28341.26 00:31:48.255 ======================================================== 00:31:48.255 Total : 19387.62 75.73 824.83 141.51 28341.26 00:31:48.255 00:31:48.255 22:39:47 -- nvme/nvme.sh@65 -- # wait 144124 00:31:48.255 22:39:47 -- nvme/nvme.sh@66 -- # wait 144125 00:31:48.255 00:31:48.255 real 0m11.216s 00:31:48.255 user 0m18.803s 00:31:48.255 sys 0m0.823s 00:31:48.255 22:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.255 ************************************ 00:31:48.255 22:39:47 -- common/autotest_common.sh@10 -- # set +x 00:31:48.255 END TEST nvme_multi_secondary 00:31:48.255 ************************************ 00:31:48.255 22:39:47 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:48.255 22:39:47 -- nvme/nvme.sh@102 -- # kill_stub 00:31:48.255 22:39:47 -- common/autotest_common.sh@1065 -- # [[ -e /proc/143299 ]] 00:31:48.255 22:39:47 -- common/autotest_common.sh@1066 -- # kill 143299 00:31:48.255 22:39:47 -- common/autotest_common.sh@1067 -- # wait 143299 00:31:49.190 [2024-07-14 22:39:47.919055] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143966) is not found. Dropping the request. 00:31:49.190 [2024-07-14 22:39:47.919200] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143966) is not found. Dropping the request. 00:31:49.190 [2024-07-14 22:39:47.919261] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143966) is not found. Dropping the request. 00:31:49.190 [2024-07-14 22:39:47.919293] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143966) is not found. Dropping the request. 00:31:49.190 22:39:48 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:31:49.190 22:39:48 -- common/autotest_common.sh@1073 -- # echo 2 00:31:49.190 22:39:48 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:49.190 22:39:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:49.190 22:39:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:49.190 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:31:49.190 ************************************ 00:31:49.190 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:49.190 ************************************ 00:31:49.190 22:39:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:49.448 * Looking for test storage... 
00:31:49.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:49.448 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:49.448 22:39:48 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:49.448 22:39:48 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:49.448 22:39:48 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:49.448 22:39:48 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:49.448 22:39:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:49.448 22:39:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:49.449 22:39:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:49.449 22:39:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:49.449 22:39:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:49.449 22:39:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:49.449 22:39:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:49.449 22:39:48 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=144298 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:49.449 22:39:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 144298 00:31:49.449 22:39:48 -- common/autotest_common.sh@819 -- # '[' -z 144298 ']' 00:31:49.449 22:39:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.449 22:39:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:49.449 22:39:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.449 22:39:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:49.449 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:31:49.449 [2024-07-14 22:39:48.425780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:49.449 [2024-07-14 22:39:48.425780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:49.449 [2024-07-14 22:39:48.426947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144298 ]
00:31:49.708 [2024-07-14 22:39:48.624063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:49.966 [2024-07-14 22:39:48.867568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:49.966 [2024-07-14 22:39:48.867988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:49.966 [2024-07-14 22:39:48.868414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:49.966 [2024-07-14 22:39:48.869101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:31:49.966 [2024-07-14 22:39:48.869153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:51.343 22:39:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:51.343 22:39:50 -- common/autotest_common.sh@852 -- # return 0
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
00:31:51.343 22:39:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:51.343 22:39:50 -- common/autotest_common.sh@10 -- # set +x
00:31:51.343 nvme0n1
00:31:51.343 22:39:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_0eatw.txt
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:31:51.343 22:39:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:51.343 22:39:50 -- common/autotest_common.sh@10 -- # set +x
00:31:51.343 true
00:31:51.343 22:39:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720996790
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=144340
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:31:51.343 22:39:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
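The test logic here is an arm/trigger/reset sequence over JSON-RPC. A hedged sketch of the three calls as they appear in the trace (the script uses the rpc_cmd socket helper for some of them; plain rpc.py is used throughout here for readability, and the backgrounding mirrors the get_feat_pid bookkeeping above):

    # Sketch of the error-injection sequence driven in this test.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. Arm: hold the next admin Get Features (opc 10 = 0x0a) for up to 15 s,
    #    then complete it with sct=0 / sc=1 (Invalid Opcode).
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # 2. Trigger: send a Get Features (Number of Queues, cdw10=7) that will stall.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    get_feat_pid=$!
    # 3. Reset: the controller reset (below) must manually complete the stuck command.
    $rpc bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"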
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:31:53.241 22:39:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:53.241 22:39:52 -- common/autotest_common.sh@10 -- # set +x
00:31:53.241 [2024-07-14 22:39:52.259491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:31:53.241 [2024-07-14 22:39:52.260430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:53.241 [2024-07-14 22:39:52.260531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:31:53.241 [2024-07-14 22:39:52.260568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:53.241 [2024-07-14 22:39:52.262893] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:53.241 22:39:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 144340
00:31:53.241 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 144340
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 144340
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:53.241 22:39:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:53.241 22:39:52 -- common/autotest_common.sh@10 -- # set +x
00:31:53.241 22:39:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:31:53.241 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_0eatw.txt
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_0eatw.txt
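The base64_decode_bits calls above unpack the saved completion (cpl) and check that the status fields match what was injected: (status >> 1) & 255 gives sc=0x1 and (status >> 9) & 3 gives sct=0x0. A hedged sketch of the same decode (the byte offset of the status word and the 3-bit SCT mask 0x7 follow the NVMe completion layout; the trace itself masks with 3):

    # Sketch: decode SC/SCT from a base64-encoded 16-byte NVMe completion,
    # mirroring the base64_decode_bits arithmetic traced above.
    decode_status() {
        local cpl_b64=$1
        local -a bytes=($(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'))
        # The 16-bit status word sits in bytes 14..15, little-endian.
        local status=$(( bytes[15] << 8 | bytes[14] ))
        # Bit 0 is the phase tag; SC is bits 1..8, SCT is bits 9..11.
        printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    }
    decode_status AAAAAAAAAAAAAAAAAAACAA==   # prints: sc=0x1 sct=0x0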
00:31:53.499 22:39:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 144298
00:31:53.499 22:39:52 -- common/autotest_common.sh@926 -- # '[' -z 144298 ']'
00:31:53.499 22:39:52 -- common/autotest_common.sh@930 -- # kill -0 144298
00:31:53.499 22:39:52 -- common/autotest_common.sh@931 -- # uname
00:31:53.499 22:39:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:53.499 22:39:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144298
00:31:53.499 22:39:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:53.499 22:39:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:53.499 22:39:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144298'
00:31:53.499 killing process with pid 144298
00:31:53.499 22:39:52 -- common/autotest_common.sh@945 -- # kill 144298
00:31:53.499 22:39:52 -- common/autotest_common.sh@950 -- # wait 144298
00:31:55.400 22:39:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:31:55.400 22:39:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:31:55.400
00:31:55.400 real 0m6.223s
00:31:55.400 user 0m22.291s
00:31:55.400 sys 0m0.559s
00:31:55.400 22:39:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:55.400 22:39:54 -- common/autotest_common.sh@10 -- # set +x
00:31:55.400 ************************************
00:31:55.400 END TEST bdev_nvme_reset_stuck_adm_cmd
00:31:55.400 ************************************
00:31:55.400 22:39:54 -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:31:55.400 22:39:54 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:31:55.400 22:39:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:31:55.400 22:39:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:55.400 22:39:54 -- common/autotest_common.sh@10 -- # set +x
00:31:55.737 ************************************
00:31:55.737 START TEST nvme_fio
00:31:55.737 ************************************
00:31:55.737 22:39:54 -- common/autotest_common.sh@1104 -- # nvme_fio_test
00:31:55.737 22:39:54 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:31:55.737 22:39:54 -- nvme/nvme.sh@32 -- # ran_fio=false
00:31:55.737 22:39:54 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs))
00:31:55.737 22:39:54 -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:31:55.737 22:39:54 -- common/autotest_common.sh@1498 -- # bdfs=()
00:31:55.737 22:39:54 -- common/autotest_common.sh@1498 -- # local bdfs
00:31:55.737 22:39:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:31:55.737 22:39:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:31:55.737 22:39:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:31:55.737 22:39:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:31:55.737 22:39:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0
00:31:55.737 22:39:54 -- nvme/nvme.sh@33 -- # local bdfs bdf
00:31:55.737 22:39:54 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:31:55.737 22:39:54 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:31:55.737 22:39:54 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:31:55.737 22:39:54 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:31:55.737 22:39:54 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:31:55.995 22:39:55 -- nvme/nvme.sh@41 -- # bs=4096
00:31:55.995 22:39:55 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
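The fio_plugin wrapper expanded in the trace below solves an ordering problem: when the SPDK fio engine is built with ASan, the sanitizer runtime must be preloaded ahead of the plugin or fio fails to load it. A minimal sketch of that discovery-and-preload dance, with the paths from this run:

    # Sketch: find the ASan runtime the plugin was linked against, then
    # preload it before the plugin itself, as traced below.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

Note the dots in traddr=0000.00.06.0: fio treats ':' as a filename separator, so the SPDK engine expects colons in the PCI address to be rewritten as dots.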
00:31:55.995 22:39:55 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:31:55.995 22:39:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:31:55.995 22:39:55 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan)
00:31:55.995 22:39:55 -- common/autotest_common.sh@1318 -- # local sanitizers
00:31:55.995 22:39:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:55.995 22:39:55 -- common/autotest_common.sh@1320 -- # shift
00:31:55.995 22:39:55 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:31:55.995 22:39:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:31:55.995 22:39:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:55.995 22:39:55 -- common/autotest_common.sh@1324 -- # grep libasan
00:31:55.995 22:39:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:31:55.995 22:39:55 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5
00:31:55.995 22:39:55 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]]
00:31:55.995 22:39:55 -- common/autotest_common.sh@1326 -- # break
00:31:55.995 22:39:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:31:55.995 22:39:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:31:56.253 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:31:56.253 fio-3.35
00:31:56.253 Starting 1 thread
00:31:59.535
00:31:59.535 test: (groupid=0, jobs=1): err= 0: pid=144488: Sun Jul 14 22:39:58 2024
00:31:59.535   read: IOPS=16.9k, BW=66.0MiB/s (69.3MB/s)(132MiB/2001msec)
00:31:59.535     slat (nsec): min=4767, max=93281, avg=6064.02, stdev=1759.64
00:31:59.535     clat (usec): min=276, max=8689, avg=3756.49, stdev=497.39
00:31:59.535      lat (usec): min=282, max=8778, avg=3762.56, stdev=498.02
00:31:59.535     clat percentiles (usec):
00:31:59.535      |  1.00th=[ 3163],  5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3458],
00:31:59.535      | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3654],
00:31:59.535      | 70.00th=[ 3752], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4490],
00:31:59.535      | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7373], 99.95th=[ 7504],
00:31:59.535      | 99.99th=[ 8455]
00:31:59.535    bw (  KiB/s): min=65280, max=67928, per=98.71%, avg=66760.00, stdev=1351.29, samples=3
00:31:59.535    iops        : min=16320, max=16982, avg=16690.00, stdev=337.82, samples=3
00:31:59.536   write: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(133MiB/2001msec); 0 zone resets
00:31:59.536     slat (usec): min=4, max=103, avg= 6.27, stdev= 1.85
00:31:59.536     clat (usec): min=330, max=8556, avg=3773.66, stdev=505.02
00:31:59.536      lat (usec): min=335, max=8575, avg=3779.93, stdev=505.62
00:31:59.536     clat percentiles (usec):
00:31:59.536      |  1.00th=[ 3195],  5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458],
00:31:59.536      | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3687],
00:31:59.536      | 70.00th=[ 3752], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4490],
00:31:59.536      | 99.00th=[ 5932], 99.50th=[ 6325], 99.90th=[ 7373], 99.95th=[ 7439],
00:31:59.536      | 99.99th=[ 8356]
00:31:59.536    bw (  KiB/s): min=65792, max=67360, per=98.36%, avg=66704.00, stdev=814.74, samples=3
00:31:59.536    iops        : min=16448, max=16840, avg=16676.00, stdev=203.69, samples=3
00:31:59.536   lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:31:59.536   lat (msec)   : 2=0.05%, 4=76.58%, 10=23.34%
00:31:59.536   cpu          : usr=100.10%, sys=0.00%, ctx=27, majf=0, minf=36
00:31:59.536   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:31:59.536      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:59.536      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:59.536      issued rwts: total=33832,33924,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:59.536      latency   : target=0, window=0, percentile=100.00%, depth=128
00:31:59.536
00:31:59.536 Run status group 0 (all jobs):
00:31:59.536    READ: bw=66.0MiB/s (69.3MB/s), 66.0MiB/s-66.0MiB/s (69.3MB/s-69.3MB/s), io=132MiB (139MB), run=2001-2001msec
00:31:59.536   WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=133MiB (139MB), run=2001-2001msec
00:31:59.793 -----------------------------------------------------
00:31:59.793 Suppressions used:
00:31:59.793   count      bytes template
00:31:59.793       1         32 /usr/src/fio/parse.c
00:31:59.793 -----------------------------------------------------
00:31:59.793
00:31:59.793 22:39:58 -- nvme/nvme.sh@44 -- # ran_fio=true
00:31:59.793 22:39:58 -- nvme/nvme.sh@46 -- # true
00:31:59.793
00:31:59.793 real 0m4.283s
00:31:59.793 user 0m3.570s
00:31:59.793 sys 0m0.386s
00:31:59.793 22:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:59.793 22:39:58 -- common/autotest_common.sh@10 -- # set +x
00:31:59.793 ************************************
00:31:59.793 END TEST nvme_fio
00:31:59.793 ************************************
00:31:59.793
00:31:59.793 real 0m48.550s
00:31:59.793 user 2m8.748s
00:31:59.793 sys 0m8.462s
00:31:59.793 22:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:59.793 ************************************
00:31:59.793 END TEST nvme
00:31:59.793 22:39:58 -- common/autotest_common.sh@10 -- # set +x
00:31:59.793 ************************************
00:31:59.793 22:39:58 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:31:59.793 22:39:58 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:32:00.052 22:39:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:32:00.052 22:39:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:00.052 22:39:58 -- common/autotest_common.sh@10 -- # set +x
00:32:00.052 ************************************
00:32:00.052 START TEST nvme_scc
00:32:00.052 ************************************
00:32:00.052 22:39:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:32:00.052 * Looking for test storage...
00:32:00.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:32:00.052 22:39:58 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:32:00.052 22:39:58 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:32:00.052 22:39:58 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:32:00.052 22:39:58 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:32:00.052 22:39:58 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:00.052 22:39:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:00.052 22:39:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:00.052 22:39:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:00.052 22:39:58 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:00.052 22:39:58 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:00.052 22:39:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:00.052 22:39:58 -- paths/export.sh@5 -- # export PATH
00:32:00.052 22:39:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:00.052 22:39:58 -- nvme/functions.sh@10 -- # ctrls=()
00:32:00.052 22:39:58 -- nvme/functions.sh@10 -- # declare -A ctrls
00:32:00.052 22:39:58 -- nvme/functions.sh@11 -- # nvmes=()
00:32:00.052 22:39:58 -- nvme/functions.sh@11 -- # declare -A nvmes
00:32:00.052 22:39:58 -- nvme/functions.sh@12 -- # bdfs=()
00:32:00.052 22:39:58 -- nvme/functions.sh@12 -- # declare -A bdfs
00:32:00.052 22:39:58 -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:32:00.052 22:39:58 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:32:00.052 22:39:58 -- nvme/functions.sh@14 -- # nvme_name=
00:32:00.052 22:39:58 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:32:00.052 22:39:58 -- nvme/nvme_scc.sh@12 -- # uname
00:32:00.052 22:39:58 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:32:00.052 22:39:58 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
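functions.sh keeps one associative array per controller and namespace; the scan that follows is nvme-cli output parsed line by line into nvme0[...] and nvme0n1[...]. A hedged sketch of that parsing loop, simplified from the nvme_get trace below (the whitespace normalization is an assumption about the 'field : value' output format):

    # Sketch: parse `nvme id-ctrl` output into an associative array,
    # in the spirit of the nvme_get loop traced below.
    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # strip the padding around the field name
        val=${val# }                  # drop the leading space after ':'
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"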
00:32:00.052 22:39:58 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:00.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:32:00.311 Waiting for block devices as requested
00:32:00.311 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:32:00.572 22:39:59 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:32:00.572 22:39:59 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:32:00.572 22:39:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:00.572 22:39:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:32:00.572 22:39:59 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:32:00.572 22:39:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:32:00.572 22:39:59 -- scripts/common.sh@15 -- # local i
00:32:00.572 22:39:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:32:00.572 22:39:59 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:32:00.572 22:39:59 -- scripts/common.sh@24 -- # return 0
00:32:00.572 22:39:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:32:00.572 22:39:59 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:32:00.572 22:39:59 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:32:00.572 22:39:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:32:00.572 [per-field xtrace elided: nvme_get reads each id-ctrl line and stores it as nvme0[reg]=val; the decoded values are]
00:32:00.572   vid=0x1b36 ssvid=0x1af4 sn='12340' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 mdts=7 cntlid=0 ver=0x10400
00:32:00.572   oaes=0x100 ctratt=0x8000 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7
00:32:00.572   wctemp=343 cctemp=373 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 subnqn=nqn.2019-08.org.qemu:12340
00:32:00.572   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:32:00.572   (all other id-ctrl fields read back 0)
00:32:00.575 22:39:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:32:00.575 22:39:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:00.575 22:39:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:32:00.575 22:39:59 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:32:00.575 22:39:59 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:32:00.575 22:39:59 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:32:00.575 22:39:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:32:00.575 [per-field xtrace elided: the same loop fills nvme0n1[reg]=val; the decoded values are]
00:32:00.575   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1
00:32:00.575   mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:00.575   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:32:00.576   (all other id-ns fields read back 0; the xtrace is truncated mid-scan at the lbaf4 record)
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # IFS=: 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.576 22:39:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # IFS=: 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.576 22:39:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # IFS=: 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.576 22:39:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:00.576 22:39:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # IFS=: 00:32:00.576 22:39:59 -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.576 22:39:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:00.576 22:39:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:00.576 22:39:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:32:00.576 22:39:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:32:00.576 22:39:59 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:00.576 22:39:59 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:32:00.576 22:39:59 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:00.576 22:39:59 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:32:00.576 22:39:59 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:32:00.576 22:39:59 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:32:00.576 22:39:59 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:32:00.576 22:39:59 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:32:00.576 22:39:59 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:32:00.576 22:39:59 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:00.576 22:39:59 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:00.576 22:39:59 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:00.576 22:39:59 -- nvme/functions.sh@76 -- # echo 0x15d 00:32:00.576 22:39:59 -- nvme/functions.sh@184 -- # oncs=0x15d 00:32:00.576 22:39:59 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:32:00.576 22:39:59 -- nvme/functions.sh@197 -- # echo nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:32:00.576 22:39:59 -- nvme/functions.sh@206 -- # echo nvme0 00:32:00.576 22:39:59 -- nvme/functions.sh@207 -- # return 0 00:32:00.576 22:39:59 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:32:00.576 22:39:59 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:32:00.576 22:39:59 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:00.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:01.093 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:32:02.030 22:40:00 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:02.030 22:40:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:32:02.030 22:40:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.030 22:40:00 -- common/autotest_common.sh@10 -- # set +x 00:32:02.030 ************************************ 00:32:02.030 START TEST nvme_simple_copy 00:32:02.030 ************************************ 00:32:02.030 22:40:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:02.288 Initializing NVMe Controllers 00:32:02.288 Attaching to 0000:00:06.0 00:32:02.288 Controller supports SCC. Attached to 0000:00:06.0 00:32:02.288 Namespace ID: 1 size: 5GB 00:32:02.288 Initialization complete. 00:32:02.288 00:32:02.288 Controller QEMU NVMe Ctrl (12340 ) 00:32:02.288 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:02.288 Namespace Block Size:4096 00:32:02.288 Writing LBAs 0 to 63 with Random Data 00:32:02.288 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:02.288 LBAs matching Written Data: 64 00:32:02.288 00:32:02.288 real 0m0.287s 00:32:02.288 user 0m0.117s 00:32:02.288 sys 0m0.072s 00:32:02.288 22:40:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.288 ************************************ 00:32:02.288 END TEST nvme_simple_copy 00:32:02.288 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:32:02.288 ************************************ 00:32:02.288 ************************************ 00:32:02.288 END TEST nvme_scc 00:32:02.288 ************************************ 00:32:02.288 00:32:02.288 real 0m2.430s 00:32:02.288 user 0m0.746s 00:32:02.288 sys 0m1.550s 00:32:02.288 22:40:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.288 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:32:02.288 22:40:01 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:32:02.288 22:40:01 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:32:02.288 22:40:01 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:32:02.288 22:40:01 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:32:02.288 22:40:01 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:32:02.288 22:40:01 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:02.288 22:40:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:02.288 22:40:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.288 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:32:02.288 ************************************ 00:32:02.288 START TEST nvme_rpc 00:32:02.288 ************************************ 00:32:02.289 22:40:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:02.547 * Looking for test storage... 
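
The reason nvme0 qualified for the simple_copy test above is a single bit: ONCS (Optional NVM Command Support, from Identify Controller) advertises the Simple Copy command in bit 8, and this controller reports oncs=0x15d. A minimal stand-alone recreation of the check traced in ctrl_has_scc (plain bash; the value is taken from this run):

    oncs=0x15d                       # from the nvme0 identify data above
    if (( oncs & (1 << 8) )); then   # 0x15d has bit 8 (0x100) set
        echo "controller supports Simple Copy (SCC)"
    fi
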
00:32:02.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:02.547 22:40:01 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:02.547 22:40:01 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:02.547 22:40:01 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:02.547 22:40:01 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:02.547 22:40:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:02.547 22:40:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:02.547 22:40:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:02.547 22:40:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:02.547 22:40:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:02.547 22:40:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:02.547 22:40:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:02.547 22:40:01 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=144990 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 144990 00:32:02.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.547 22:40:01 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:02.547 22:40:01 -- common/autotest_common.sh@819 -- # '[' -z 144990 ']' 00:32:02.547 22:40:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.547 22:40:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:02.547 22:40:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.547 22:40:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:02.547 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:32:02.547 [2024-07-14 22:40:01.551558] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
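
The bdf=0000:00:06.0 picked up above comes from the get_first_nvme_bdf pattern traced in autotest_common.sh: scripts/gen_nvme.sh prints a bdev_nvme_attach_controller config fragment for every local controller, and jq pulls out each PCI address. The same pattern in isolation (paths as in this run):

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # first (and here only) controller: 0000:00:06.0
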
00:32:02.547 [2024-07-14 22:40:01.552553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144990 ] 00:32:02.805 [2024-07-14 22:40:01.726859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:03.063 [2024-07-14 22:40:01.969201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:03.063 [2024-07-14 22:40:01.969893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.063 [2024-07-14 22:40:01.969898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.439 22:40:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:04.439 22:40:03 -- common/autotest_common.sh@852 -- # return 0 00:32:04.439 22:40:03 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:32:04.696 Nvme0n1 00:32:04.696 22:40:03 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:04.696 22:40:03 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:04.953 request: 00:32:04.953 { 00:32:04.953 "filename": "non_existing_file", 00:32:04.953 "bdev_name": "Nvme0n1", 00:32:04.953 "method": "bdev_nvme_apply_firmware", 00:32:04.953 "req_id": 1 00:32:04.953 } 00:32:04.953 Got JSON-RPC error response 00:32:04.953 response: 00:32:04.953 { 00:32:04.953 "code": -32603, 00:32:04.953 "message": "open file failed." 00:32:04.953 } 00:32:04.953 22:40:03 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:04.953 22:40:03 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:04.953 22:40:03 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:04.953 22:40:04 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:04.953 22:40:04 -- nvme/nvme_rpc.sh@40 -- # killprocess 144990 00:32:04.953 22:40:04 -- common/autotest_common.sh@926 -- # '[' -z 144990 ']' 00:32:04.953 22:40:04 -- common/autotest_common.sh@930 -- # kill -0 144990 00:32:04.953 22:40:04 -- common/autotest_common.sh@931 -- # uname 00:32:04.953 22:40:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:04.953 22:40:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144990 00:32:05.211 22:40:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:05.211 22:40:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:05.211 killing process with pid 144990 00:32:05.211 22:40:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144990' 00:32:05.211 22:40:04 -- common/autotest_common.sh@945 -- # kill 144990 00:32:05.211 22:40:04 -- common/autotest_common.sh@950 -- # wait 144990 00:32:07.110 00:32:07.110 real 0m4.555s 00:32:07.110 user 0m8.849s 00:32:07.110 sys 0m0.672s 00:32:07.110 22:40:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.110 ************************************ 00:32:07.110 END TEST nvme_rpc 00:32:07.110 ************************************ 00:32:07.110 22:40:05 -- common/autotest_common.sh@10 -- # set +x 00:32:07.110 22:40:05 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:07.110 22:40:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.110 22:40:05 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:32:07.110 22:40:05 -- common/autotest_common.sh@10 -- # set +x 00:32:07.110 ************************************ 00:32:07.110 START TEST nvme_rpc_timeouts 00:32:07.110 ************************************ 00:32:07.110 22:40:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:07.110 * Looking for test storage... 00:32:07.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_145077 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_145077 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=145102 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:07.110 22:40:06 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:07.111 22:40:06 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 145102 00:32:07.111 22:40:06 -- common/autotest_common.sh@819 -- # '[' -z 145102 ']' 00:32:07.111 22:40:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.111 22:40:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:07.111 22:40:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.111 22:40:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:07.111 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.111 [2024-07-14 22:40:06.121250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:07.111 [2024-07-14 22:40:06.121459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145102 ] 00:32:07.368 [2024-07-14 22:40:06.294113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.626 [2024-07-14 22:40:06.488901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:07.626 [2024-07-14 22:40:06.489852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.626 [2024-07-14 22:40:06.489872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.002 22:40:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.002 Checking default timeout settings: 00:32:09.002 22:40:07 -- common/autotest_common.sh@852 -- # return 0 00:32:09.002 22:40:07 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:09.002 22:40:07 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:09.259 Making settings changes with rpc: 00:32:09.259 22:40:08 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:09.259 22:40:08 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:09.517 Check default vs. 
modified settings: 00:32:09.517 22:40:08 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:32:09.517 22:40:08 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:09.777 22:40:08 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:32:09.777 22:40:08 -- nvme/nvme_rpc_timeouts.sh@39-41 -- # for each setting, the value is grepped out of /tmp/settings_default_145077 and /tmp/settings_modified_145077, reduced with awk '{print $2}' and sed 's/[^a-zA-Z0-9]//g', and compared:
00:32:09.777 action_on_timeout: default=none modified=abort
Setting action_on_timeout is changed as expected.
00:32:09.777 timeout_us: default=0 modified=12000000
Setting timeout_us is changed as expected.
00:32:09.777 timeout_admin_us: default=0 modified=24000000
Setting timeout_admin_us is changed as expected.
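
Stripped of the xtrace noise, the whole nvme_rpc_timeouts test is a snapshot-change-snapshot diff. A condensed replay of the commands traced above (same RPCs and text filters; the settings files are this run's temp names):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_145077        # defaults, before any change
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_145077       # after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_145077 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_145077 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done
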
00:32:09.777 22:40:08 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:09.777 22:40:08 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_145077 /tmp/settings_modified_145077 00:32:09.777 22:40:08 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 145102 00:32:09.777 22:40:08 -- common/autotest_common.sh@926 -- # '[' -z 145102 ']' 00:32:09.777 22:40:08 -- common/autotest_common.sh@930 -- # kill -0 145102 00:32:09.777 22:40:08 -- common/autotest_common.sh@931 -- # uname 00:32:09.777 22:40:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:09.777 22:40:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145102 00:32:09.777 killing process with pid 145102 00:32:09.777 22:40:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:09.777 22:40:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:09.777 22:40:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145102' 00:32:09.777 22:40:08 -- common/autotest_common.sh@945 -- # kill 145102 00:32:09.777 22:40:08 -- common/autotest_common.sh@950 -- # wait 145102 00:32:11.679 RPC TIMEOUT SETTING TEST PASSED. 00:32:11.679 22:40:10 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:32:11.679 00:32:11.679 real 0m4.789s 00:32:11.679 user 0m9.477s 00:32:11.679 sys 0m0.619s 00:32:11.679 ************************************ 00:32:11.679 END TEST nvme_rpc_timeouts 00:32:11.679 ************************************ 00:32:11.679 22:40:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.679 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.937 22:40:10 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:32:11.938 22:40:10 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@268 -- # timing_exit lib 00:32:11.938 22:40:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:11.938 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.938 22:40:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:11.938 22:40:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:11.938 22:40:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:11.938 22:40:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:11.938 22:40:10 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:32:11.938 22:40:10 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:32:11.938 22:40:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:11.938 22:40:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:32:11.938 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.938 ************************************ 00:32:11.938 START TEST blockdev_raid5f 00:32:11.938 ************************************ 00:32:11.938 22:40:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:32:11.938 * Looking for test storage... 00:32:11.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:32:11.938 22:40:10 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:32:11.938 22:40:10 -- bdev/nbd_common.sh@6 -- # set -e 00:32:11.938 22:40:10 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:11.938 22:40:10 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:11.938 22:40:10 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:32:11.938 22:40:10 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:32:11.938 22:40:10 -- bdev/blockdev.sh@18 -- # : 00:32:11.938 22:40:10 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:32:11.938 22:40:10 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:32:11.938 22:40:10 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:32:11.938 22:40:10 -- bdev/blockdev.sh@672 -- # uname -s 00:32:11.938 22:40:10 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:32:11.938 22:40:10 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:32:11.938 22:40:10 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:32:11.938 22:40:10 -- bdev/blockdev.sh@681 -- # crypto_device= 00:32:11.938 22:40:10 -- bdev/blockdev.sh@682 -- # dek= 00:32:11.938 22:40:10 -- bdev/blockdev.sh@683 -- # env_ctx= 00:32:11.938 22:40:10 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:32:11.938 22:40:10 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:32:11.938 22:40:10 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:32:11.938 22:40:10 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:32:11.938 22:40:10 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:32:11.938 22:40:10 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145276 00:32:11.938 22:40:10 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:11.938 22:40:10 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:11.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.938 22:40:10 -- bdev/blockdev.sh@47 -- # waitforlisten 145276 00:32:11.938 22:40:10 -- common/autotest_common.sh@819 -- # '[' -z 145276 ']' 00:32:11.938 22:40:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.938 22:40:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:11.938 22:40:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.938 22:40:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:11.938 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:32:12.197 [2024-07-14 22:40:11.025862] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:12.197 [2024-07-14 22:40:11.026993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145276 ] 00:32:12.197 [2024-07-14 22:40:11.197220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.458 [2024-07-14 22:40:11.392251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:12.458 [2024-07-14 22:40:11.392672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.838 22:40:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:13.838 22:40:12 -- common/autotest_common.sh@852 -- # return 0 00:32:13.838 22:40:12 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:32:13.838 22:40:12 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:32:13.838 22:40:12 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 Malloc0 00:32:13.838 Malloc1 00:32:13.838 Malloc2 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@738 -- # cat 00:32:13.838 22:40:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:32:13.838 22:40:12 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:32:13.838 22:40:12 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:32:13.838 22:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.838 22:40:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.838 22:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.838 22:40:12 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:32:13.838 22:40:12 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "771d1efb-3a9a-41ac-9569-c95b13d9d59b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "771d1efb-3a9a-41ac-9569-c95b13d9d59b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "771d1efb-3a9a-41ac-9569-c95b13d9d59b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "50146396-c77d-4236-868d-020343ff74fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "68cba21e-48f7-4eab-ac04-73e35d29c58f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "851f0bd6-8109-4e7b-840c-85a216098f02",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:13.838 22:40:12 -- bdev/blockdev.sh@747 -- # jq -r .name 00:32:14.098 22:40:12 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:32:14.098 22:40:12 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:32:14.098 22:40:12 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:32:14.098 22:40:12 -- bdev/blockdev.sh@752 -- # killprocess 145276 00:32:14.098 22:40:12 -- common/autotest_common.sh@926 -- # '[' -z 145276 ']' 00:32:14.098 22:40:12 -- common/autotest_common.sh@930 -- # kill -0 145276 00:32:14.098 22:40:12 -- common/autotest_common.sh@931 -- # uname 00:32:14.098 22:40:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:14.098 22:40:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145276 00:32:14.098 22:40:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:14.098 killing process with pid 145276 00:32:14.098 22:40:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:14.098 22:40:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145276' 00:32:14.098 22:40:12 -- common/autotest_common.sh@945 -- # kill 145276 00:32:14.098 22:40:12 -- common/autotest_common.sh@950 -- # wait 145276 00:32:16.628 22:40:15 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:16.628 22:40:15 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:32:16.628 22:40:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:16.628 22:40:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:16.628 22:40:15 -- common/autotest_common.sh@10 -- # set +x 00:32:16.628 ************************************ 00:32:16.628 START TEST bdev_hello_world 00:32:16.628 ************************************ 00:32:16.628 22:40:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:32:16.628 [2024-07-14 22:40:15.254036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:16.628 [2024-07-14 22:40:15.254574] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145356 ] 00:32:16.628 [2024-07-14 22:40:15.427068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.628 [2024-07-14 22:40:15.616439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.195 [2024-07-14 22:40:16.089940] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:17.195 [2024-07-14 22:40:16.090327] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:32:17.195 [2024-07-14 22:40:16.090412] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:17.195 [2024-07-14 22:40:16.091126] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:17.195 [2024-07-14 22:40:16.091481] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:17.195 [2024-07-14 22:40:16.091633] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:17.195 [2024-07-14 22:40:16.091755] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:17.195 00:32:17.195 [2024-07-14 22:40:16.091935] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:18.572 ************************************ 00:32:18.572 END TEST bdev_hello_world 00:32:18.572 ************************************ 00:32:18.572 00:32:18.572 real 0m2.155s 00:32:18.572 user 0m1.757s 00:32:18.572 sys 0m0.277s 00:32:18.572 22:40:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.572 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:32:18.572 22:40:17 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:32:18.572 22:40:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:18.572 22:40:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:18.572 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:32:18.572 ************************************ 00:32:18.572 START TEST bdev_bounds 00:32:18.572 ************************************ 00:32:18.572 22:40:17 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:32:18.572 22:40:17 -- bdev/blockdev.sh@288 -- # bdevio_pid=145422 00:32:18.572 22:40:17 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:18.572 22:40:17 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:18.572 22:40:17 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 145422' 00:32:18.572 Process bdevio pid: 145422 00:32:18.572 22:40:17 -- bdev/blockdev.sh@291 -- # waitforlisten 145422 00:32:18.572 22:40:17 -- common/autotest_common.sh@819 -- # '[' -z 145422 ']' 00:32:18.572 22:40:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.572 22:40:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:18.572 22:40:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
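
bdevio, launched above, runs SPDK as an I/O target on /var/tmp/spdk.sock while tests.py drives the CUnit suite whose results follow. To reproduce this step by hand from the repo root (the -w and -s 0 flags are copied from the trace, not explained by it):

    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # target side
    test/bdev/bdevio/tests.py perform_tests                             # driver side
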
00:32:18.572 22:40:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:18.572 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:32:18.572 [2024-07-14 22:40:17.460020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:18.572 [2024-07-14 22:40:17.461178] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145422 ] 00:32:18.831 [2024-07-14 22:40:17.652405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:18.831 [2024-07-14 22:40:17.847492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.831 [2024-07-14 22:40:17.847653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.831 [2024-07-14 22:40:17.847662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.397 22:40:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:19.397 22:40:18 -- common/autotest_common.sh@852 -- # return 0 00:32:19.397 22:40:18 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:19.656 I/O targets: 00:32:19.656 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:32:19.656 00:32:19.656 00:32:19.656 CUnit - A unit testing framework for C - Version 2.1-3 00:32:19.656 http://cunit.sourceforge.net/ 00:32:19.656 00:32:19.656 00:32:19.656 Suite: bdevio tests on: raid5f 00:32:19.656 Test: blockdev write read block ...passed 00:32:19.656 Test: blockdev write zeroes read block ...passed 00:32:19.656 Test: blockdev write zeroes read no split ...passed 00:32:19.656 Test: blockdev write zeroes read split ...passed 00:32:19.915 Test: blockdev write zeroes read split partial ...passed 00:32:19.915 Test: blockdev reset ...passed 00:32:19.915 Test: blockdev write read 8 blocks ...passed 00:32:19.915 Test: blockdev write read size > 128k ...passed 00:32:19.915 Test: blockdev write read invalid size ...passed 00:32:19.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:19.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:19.915 Test: blockdev write read max offset ...passed 00:32:19.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:19.915 Test: blockdev writev readv 8 blocks ...passed 00:32:19.915 Test: blockdev writev readv 30 x 1block ...passed 00:32:19.915 Test: blockdev writev readv block ...passed 00:32:19.915 Test: blockdev writev readv size > 128k ...passed 00:32:19.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:19.915 Test: blockdev comparev and writev ...passed 00:32:19.915 Test: blockdev nvme passthru rw ...passed 00:32:19.915 Test: blockdev nvme passthru vendor specific ...passed 00:32:19.915 Test: blockdev nvme admin passthru ...passed 00:32:19.915 Test: blockdev copy ...passed 00:32:19.915 00:32:19.915 Run Summary: Type Total Ran Passed Failed Inactive 00:32:19.915 suites 1 1 n/a 0 0 00:32:19.915 tests 23 23 23 0 0 00:32:19.915 asserts 130 130 130 0 n/a 00:32:19.915 00:32:19.915 Elapsed time = 0.501 seconds 00:32:19.915 0 00:32:19.915 22:40:18 -- bdev/blockdev.sh@293 -- # killprocess 145422 00:32:19.915 22:40:18 -- common/autotest_common.sh@926 -- # '[' -z 145422 ']' 00:32:19.915 22:40:18 -- common/autotest_common.sh@930 -- # kill -0 145422 00:32:19.915 22:40:18 -- common/autotest_common.sh@931 -- # uname 00:32:19.915 22:40:18 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:19.915 22:40:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145422 00:32:19.915 22:40:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:19.915 killing process with pid 145422 00:32:19.915 22:40:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:19.915 22:40:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145422' 00:32:19.915 22:40:18 -- common/autotest_common.sh@945 -- # kill 145422 00:32:19.915 22:40:18 -- common/autotest_common.sh@950 -- # wait 145422 00:32:21.293 ************************************ 00:32:21.293 END TEST bdev_bounds 00:32:21.293 ************************************ 00:32:21.293 22:40:20 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:32:21.293 00:32:21.293 real 0m2.634s 00:32:21.293 user 0m6.259s 00:32:21.293 sys 0m0.391s 00:32:21.293 22:40:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.293 22:40:20 -- common/autotest_common.sh@10 -- # set +x 00:32:21.293 22:40:20 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:32:21.293 22:40:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:32:21.293 22:40:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.293 22:40:20 -- common/autotest_common.sh@10 -- # set +x 00:32:21.293 ************************************ 00:32:21.293 START TEST bdev_nbd 00:32:21.293 ************************************ 00:32:21.293 22:40:20 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:32:21.293 22:40:20 -- bdev/blockdev.sh@298 -- # uname -s 00:32:21.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
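
bdev_nbd only runs when the kernel nbd module is available; blockdev.sh checks /sys/module/nbd before building the device list. The test then starts bdev_svc on a dedicated RPC socket and exports raid5f as a kernel block device. The core handshake, extracted from the traces that follow:

    sudo modprobe nbd                                  # provides /sys/module/nbd
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0   # export bdev
    $rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # list mappings
    $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0           # tear down
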
00:32:21.293 22:40:20 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:32:21.293 22:40:20 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:21.293 22:40:20 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:21.293 22:40:20 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:32:21.293 22:40:20 -- bdev/blockdev.sh@302 -- # local bdev_all 00:32:21.293 22:40:20 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:32:21.293 22:40:20 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:32:21.293 22:40:20 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:32:21.293 22:40:20 -- bdev/blockdev.sh@309 -- # local nbd_all 00:32:21.293 22:40:20 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:32:21.293 22:40:20 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:32:21.293 22:40:20 -- bdev/blockdev.sh@312 -- # local nbd_list 00:32:21.293 22:40:20 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:32:21.293 22:40:20 -- bdev/blockdev.sh@313 -- # local bdev_list 00:32:21.293 22:40:20 -- bdev/blockdev.sh@316 -- # nbd_pid=145491 00:32:21.293 22:40:20 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:21.293 22:40:20 -- bdev/blockdev.sh@318 -- # waitforlisten 145491 /var/tmp/spdk-nbd.sock 00:32:21.293 22:40:20 -- common/autotest_common.sh@819 -- # '[' -z 145491 ']' 00:32:21.293 22:40:20 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:21.293 22:40:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:21.293 22:40:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:21.293 22:40:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:21.293 22:40:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:21.293 22:40:20 -- common/autotest_common.sh@10 -- # set +x 00:32:21.293 [2024-07-14 22:40:20.150951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:21.293 [2024-07-14 22:40:20.152130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.293 [2024-07-14 22:40:20.327828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.552 [2024-07-14 22:40:20.517428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.119 22:40:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:22.119 22:40:21 -- common/autotest_common.sh@852 -- # return 0 00:32:22.119 22:40:21 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@24 -- # local i 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:22.119 22:40:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:22.378 22:40:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:32:22.378 22:40:21 -- common/autotest_common.sh@857 -- # local i 00:32:22.378 22:40:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:22.378 22:40:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:22.378 22:40:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:32:22.378 22:40:21 -- common/autotest_common.sh@861 -- # break 00:32:22.378 22:40:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:22.378 22:40:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:22.378 22:40:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:22.378 1+0 records in 00:32:22.378 1+0 records out 00:32:22.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715051 s, 5.7 MB/s 00:32:22.378 22:40:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:22.378 22:40:21 -- common/autotest_common.sh@874 -- # size=4096 00:32:22.378 22:40:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:22.378 22:40:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:22.378 22:40:21 -- common/autotest_common.sh@877 -- # return 0 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:22.378 22:40:21 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:22.638 22:40:21 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:22.638 { 00:32:22.638 "nbd_device": "/dev/nbd0", 00:32:22.638 "bdev_name": "raid5f" 00:32:22.638 } 00:32:22.638 ]' 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:22.638 { 00:32:22.638 "nbd_device": "/dev/nbd0", 00:32:22.638 "bdev_name": "raid5f" 00:32:22.638 } 00:32:22.638 ]' 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@51 -- # local i 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:22.638 22:40:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@41 -- # break 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@45 -- # return 0 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.897 22:40:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:23.155 22:40:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:23.155 22:40:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:23.155 22:40:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@65 -- # true 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@65 -- # count=0 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@122 -- # count=0 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@127 -- # return 0 00:32:23.412 22:40:22 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
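
nbd_common.sh round-trips the nbd_get_disks JSON through jq to recover the device names, as the @118/@119 trace above shows. The same parse in isolation (JSON literal copied from this run):

    nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid5f" } ]'
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
    echo "${nbd_disks_name[0]}"   # -> /dev/nbd0
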
00:32:23.412 22:40:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@12 -- # local i 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:23.412 22:40:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:32:23.670 /dev/nbd0 00:32:23.670 22:40:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:23.671 22:40:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:32:23.671 22:40:22 -- common/autotest_common.sh@857 -- # local i 00:32:23.671 22:40:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:23.671 22:40:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:23.671 22:40:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:32:23.671 22:40:22 -- common/autotest_common.sh@861 -- # break 00:32:23.671 22:40:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:23.671 22:40:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:23.671 22:40:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:23.671 1+0 records in 00:32:23.671 1+0 records out 00:32:23.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623602 s, 6.6 MB/s 00:32:23.671 22:40:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:23.671 22:40:22 -- common/autotest_common.sh@874 -- # size=4096 00:32:23.671 22:40:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:23.671 22:40:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:23.671 22:40:22 -- common/autotest_common.sh@877 -- # return 0 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:23.671 22:40:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:23.929 { 00:32:23.929 "nbd_device": "/dev/nbd0", 00:32:23.929 "bdev_name": "raid5f" 00:32:23.929 } 00:32:23.929 ]' 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:23.929 { 00:32:23.929 "nbd_device": "/dev/nbd0", 00:32:23.929 "bdev_name": "raid5f" 00:32:23.929 } 00:32:23.929 ]' 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@65 -- # count=1 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@66 -- # echo 1 00:32:23.929 22:40:22 -- bdev/nbd_common.sh@95 -- # count=1 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
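The waitfornbd polling that shows up twice above reduces to roughly this helper (reconstructed from the visible loop bounds; the 0.1 s sleep and the temp-file path are assumptions — the real script reads into test/bdev/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device is ready once the kernel lists it in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1   # gave up after 20 polls
        # read one 4 KiB block with O_DIRECT to prove the device answers I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }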
00:32:23.930 22:40:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:23.930 256+0 records in 00:32:23.930 256+0 records out 00:32:23.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621849 s, 169 MB/s 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:23.930 256+0 records in 00:32:23.930 256+0 records out 00:32:23.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324948 s, 32.3 MB/s 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@51 -- # local i 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:23.930 22:40:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@41 -- # break 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@45 -- # return 0 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:24.188 22:40:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:24.446 22:40:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:24.446 22:40:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:24.446 22:40:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
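The write/verify pass traced above is, in isolation, just a dd/cmp roundtrip (paths shortened; the real temp file lives under test/bdev/):

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
    cmp -b -n 1M "$tmp_file" /dev/nbd0                             # byte-wise readback comparison
    rm "$tmp_file"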
00:32:24.704 22:40:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@65 -- # true 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@65 -- # count=0 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@104 -- # count=0 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@109 -- # return 0 00:32:24.704 22:40:23 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:24.704 22:40:23 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:24.704 malloc_lvol_verify 00:32:24.962 22:40:23 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:24.962 0026bfc7-23ad-4598-aac2-a27c2f705263 00:32:24.962 22:40:24 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:25.219 82854d63-7eaa-4611-b6c5-12882d0f4b57 00:32:25.219 22:40:24 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:25.477 /dev/nbd0 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:25.477 mke2fs 1.45.5 (07-Jan-2020) 00:32:25.477 00:32:25.477 Filesystem too small for a journal 00:32:25.477 Creating filesystem with 1024 4k blocks and 1024 inodes 00:32:25.477 00:32:25.477 Allocating group tables: 0/1 done 00:32:25.477 Writing inode tables: 0/1 done 00:32:25.477 Writing superblocks and filesystem accounting information: 0/1 done 00:32:25.477 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@51 -- # local i 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:25.477 22:40:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:25.735 22:40:24 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:25.992 22:40:24 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@41 -- # break 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@45 -- # return 0 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:25.992 22:40:24 -- bdev/nbd_common.sh@147 -- # return 0 00:32:25.992 22:40:24 -- bdev/blockdev.sh@324 -- # killprocess 145491 00:32:25.992 22:40:24 -- common/autotest_common.sh@926 -- # '[' -z 145491 ']' 00:32:25.992 22:40:24 -- common/autotest_common.sh@930 -- # kill -0 145491 00:32:25.992 22:40:24 -- common/autotest_common.sh@931 -- # uname 00:32:25.992 22:40:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:25.992 22:40:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145491 00:32:25.992 22:40:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:25.992 22:40:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:25.992 22:40:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145491' 00:32:25.992 killing process with pid 145491 00:32:25.992 22:40:24 -- common/autotest_common.sh@945 -- # kill 145491 00:32:25.992 22:40:24 -- common/autotest_common.sh@950 -- # wait 145491 00:32:27.367 ************************************ 00:32:27.368 END TEST bdev_nbd 00:32:27.368 ************************************ 00:32:27.368 22:40:26 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:32:27.368 00:32:27.368 real 0m6.255s 00:32:27.368 user 0m8.991s 00:32:27.368 sys 0m1.041s 00:32:27.368 22:40:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.368 22:40:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.368 22:40:26 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:32:27.368 22:40:26 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:32:27.368 22:40:26 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:32:27.368 22:40:26 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:27.368 22:40:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.368 ************************************ 00:32:27.368 START TEST bdev_fio 00:32:27.368 ************************************ 00:32:27.368 22:40:26 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:32:27.368 22:40:26 -- bdev/blockdev.sh@329 -- # local env_context 00:32:27.368 22:40:26 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:27.368 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:27.368 22:40:26 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:27.368 22:40:26 -- bdev/blockdev.sh@337 -- # echo '' 00:32:27.368 22:40:26 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:32:27.368 22:40:26 -- bdev/blockdev.sh@337 -- # env_context= 00:32:27.368 22:40:26 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:27.368 22:40:26 -- common/autotest_common.sh@1260 -- # local workload=verify 00:32:27.368 22:40:26 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:32:27.368 22:40:26 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:27.368 22:40:26 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:27.368 
22:40:26 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:27.368 22:40:26 -- common/autotest_common.sh@1280 -- # cat 00:32:27.368 22:40:26 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1293 -- # cat 00:32:27.368 22:40:26 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:32:27.368 22:40:26 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:32:27.368 22:40:26 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:27.368 22:40:26 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:32:27.368 22:40:26 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:27.368 22:40:26 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:32:27.641 22:40:26 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:32:27.641 22:40:26 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:27.641 22:40:26 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:27.641 22:40:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:27.641 22:40:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:27.641 22:40:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.641 ************************************ 00:32:27.641 START TEST bdev_fio_rw_verify 00:32:27.641 ************************************ 00:32:27.641 22:40:26 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:27.641 22:40:26 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:27.641 22:40:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:27.641 22:40:26 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:32:27.641 22:40:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:27.641 22:40:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:27.641 22:40:26 -- common/autotest_common.sh@1320 -- # shift 00:32:27.641 22:40:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:27.641 22:40:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.641 22:40:26 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:27.641 22:40:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:27.641 22:40:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:27.641 22:40:26 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:32:27.641 22:40:26 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:32:27.641 22:40:26 -- common/autotest_common.sh@1326 -- # break 00:32:27.641 22:40:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:27.641 22:40:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:27.641 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:27.641 fio-3.35 00:32:27.641 Starting 1 thread 00:32:39.907 00:32:39.907 job_raid5f: (groupid=0, jobs=1): err= 0: pid=145755: Sun Jul 14 22:40:37 2024 00:32:39.907 read: IOPS=9676, BW=37.8MiB/s (39.6MB/s)(378MiB/10001msec) 00:32:39.907 slat (usec): min=20, max=545, avg=24.80, stdev= 6.19 00:32:39.907 clat (usec): min=13, max=1087, avg=163.22, stdev=63.93 00:32:39.907 lat (usec): min=37, max=1119, avg=188.02, stdev=65.46 00:32:39.907 clat percentiles (usec): 00:32:39.907 | 50.000th=[ 163], 99.000th=[ 318], 99.900th=[ 482], 99.990th=[ 955], 00:32:39.907 | 99.999th=[ 1090] 00:32:39.907 write: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(392MiB/9881msec); 0 zone resets 00:32:39.907 slat (usec): min=10, max=836, avg=21.50, stdev= 7.06 00:32:39.907 clat (usec): min=73, max=3498, avg=376.61, stdev=76.24 00:32:39.907 lat (usec): min=92, max=3526, avg=398.10, stdev=79.09 00:32:39.907 clat percentiles (usec): 00:32:39.907 | 50.000th=[ 371], 99.000th=[ 578], 99.900th=[ 1004], 99.990th=[ 1582], 00:32:39.907 | 99.999th=[ 3458] 00:32:39.907 bw ( KiB/s): min=28224, max=44728, per=99.02%, avg=40236.63, stdev=3865.22, samples=19 00:32:39.907 iops : min= 7056, max=11182, avg=10059.16, stdev=966.31, samples=19 00:32:39.907 lat (usec) : 20=0.01%, 50=0.01%, 100=10.60%, 250=35.27%, 500=51.83% 00:32:39.907 lat (usec) : 750=2.04%, 1000=0.20% 00:32:39.907 lat (msec) : 2=0.05%, 4=0.01% 00:32:39.907 cpu : usr=99.02%, sys=0.81%, ctx=189, majf=0, minf=6891 00:32:39.907 IO depths : 1=7.6%, 2=20.0%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.907 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.907 issued rwts: total=96775,100379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:39.907 00:32:39.907 Run status group 0 (all jobs): 00:32:39.907 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=378MiB (396MB), run=10001-10001msec 00:32:39.907 WRITE: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=392MiB (411MB), run=9881-9881msec 00:32:40.166 ----------------------------------------------------- 00:32:40.166 Suppressions used: 00:32:40.166 count bytes template 00:32:40.166 1 7 /usr/src/fio/parse.c 00:32:40.166 592 56832 /usr/src/fio/iolog.c 00:32:40.166 2 596 libcrypto.so 00:32:40.166 
----------------------------------------------------- 00:32:40.166 00:32:40.166 ************************************ 00:32:40.166 END TEST bdev_fio_rw_verify 00:32:40.166 ************************************ 00:32:40.166 00:32:40.166 real 0m12.643s 00:32:40.166 user 0m13.079s 00:32:40.166 sys 0m0.750s 00:32:40.166 22:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.166 22:40:39 -- common/autotest_common.sh@10 -- # set +x 00:32:40.166 22:40:39 -- bdev/blockdev.sh@348 -- # rm -f 00:32:40.166 22:40:39 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:40.166 22:40:39 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:40.166 22:40:39 -- common/autotest_common.sh@1260 -- # local workload=trim 00:32:40.166 22:40:39 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:32:40.166 22:40:39 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:40.166 22:40:39 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:40.166 22:40:39 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:40.166 22:40:39 -- common/autotest_common.sh@1280 -- # cat 00:32:40.166 22:40:39 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:32:40.166 22:40:39 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:32:40.166 22:40:39 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:40.166 22:40:39 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "771d1efb-3a9a-41ac-9569-c95b13d9d59b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "771d1efb-3a9a-41ac-9569-c95b13d9d59b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "771d1efb-3a9a-41ac-9569-c95b13d9d59b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "50146396-c77d-4236-868d-020343ff74fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "68cba21e-48f7-4eab-ac04-73e35d29c58f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "851f0bd6-8109-4e7b-840c-85a216098f02",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:40.166 22:40:39 -- bdev/blockdev.sh@353 -- # [[ 
-n '' ]] 00:32:40.166 22:40:39 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:40.166 22:40:39 -- bdev/blockdev.sh@360 -- # popd 00:32:40.166 /home/vagrant/spdk_repo/spdk 00:32:40.166 22:40:39 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:32:40.166 22:40:39 -- bdev/blockdev.sh@362 -- # return 0 00:32:40.166 00:32:40.166 real 0m12.834s 00:32:40.166 user 0m13.200s 00:32:40.166 sys 0m0.810s 00:32:40.166 22:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.166 22:40:39 -- common/autotest_common.sh@10 -- # set +x 00:32:40.166 ************************************ 00:32:40.166 END TEST bdev_fio 00:32:40.166 ************************************ 00:32:40.425 22:40:39 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:40.425 22:40:39 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.425 22:40:39 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:40.425 22:40:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:40.425 22:40:39 -- common/autotest_common.sh@10 -- # set +x 00:32:40.425 ************************************ 00:32:40.425 START TEST bdev_verify 00:32:40.425 ************************************ 00:32:40.425 22:40:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.425 [2024-07-14 22:40:39.360728] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:40.425 [2024-07-14 22:40:39.361271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145942 ] 00:32:40.684 [2024-07-14 22:40:39.535462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:40.684 [2024-07-14 22:40:39.730202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.684 [2024-07-14 22:40:39.730203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.250 Running I/O for 5 seconds... 
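Two invocations from the runs above are worth seeing without the trace noise. First, the ASan preload dance from the fio stage: fio itself is not built with ASan, so the suite locates the libasan that the SPDK ioengine plugin links against and preloads it ahead of the plugin (library paths as printed in the trace; job-file paths shortened):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
        --verify_state_save=0 --spdk_json_conf=bdev.json

Second, the bdev_verify launch just above, with the flags copied verbatim from the trace (128 outstanding requests, 4 KiB I/Os, five seconds, core mask 0x3):

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3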
00:32:46.512 00:32:46.512 Latency(us) 00:32:46.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.512 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:46.512 Verification LBA range: start 0x0 length 0x2000 00:32:46.512 raid5f : 5.02 7438.63 29.06 0.00 0.00 27277.44 237.38 20375.74 00:32:46.513 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:46.513 Verification LBA range: start 0x2000 length 0x2000 00:32:46.513 raid5f : 5.01 7821.54 30.55 0.00 0.00 25940.59 325.82 20614.05 00:32:46.513 =================================================================================================================== 00:32:46.513 Total : 15260.17 59.61 0.00 0.00 26592.40 237.38 20614.05 00:32:47.445 ************************************ 00:32:47.445 END TEST bdev_verify 00:32:47.445 ************************************ 00:32:47.445 00:32:47.445 real 0m7.228s 00:32:47.445 user 0m13.201s 00:32:47.445 sys 0m0.308s 00:32:47.445 22:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.445 22:40:46 -- common/autotest_common.sh@10 -- # set +x 00:32:47.702 22:40:46 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:47.702 22:40:46 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:47.702 22:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:47.702 22:40:46 -- common/autotest_common.sh@10 -- # set +x 00:32:47.702 ************************************ 00:32:47.702 START TEST bdev_verify_big_io 00:32:47.702 ************************************ 00:32:47.702 22:40:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:47.702 [2024-07-14 22:40:46.637691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:47.702 [2024-07-14 22:40:46.638213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146049 ] 00:32:47.959 [2024-07-14 22:40:46.812461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.959 [2024-07-14 22:40:47.006770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.959 [2024-07-14 22:40:47.006777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.525 Running I/O for 5 seconds... 
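bdev_verify_big_io, whose run has just started above, is the same harness with 64 KiB I/Os; only the -o argument changes:

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3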
00:32:53.791 00:32:53.791 Latency(us) 00:32:53.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.791 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:53.791 Verification LBA range: start 0x0 length 0x200 00:32:53.791 raid5f : 5.18 541.87 33.87 0.00 0.00 6144043.57 173.15 197322.94 00:32:53.791 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:53.791 Verification LBA range: start 0x200 length 0x200 00:32:53.791 raid5f : 5.19 533.43 33.34 0.00 0.00 6239928.76 194.56 198276.19 00:32:53.791 =================================================================================================================== 00:32:53.791 Total : 1075.29 67.21 0.00 0.00 6191676.74 173.15 198276.19 00:32:55.163 ************************************ 00:32:55.163 END TEST bdev_verify_big_io 00:32:55.163 ************************************ 00:32:55.163 00:32:55.163 real 0m7.417s 00:32:55.163 user 0m13.629s 00:32:55.163 sys 0m0.272s 00:32:55.163 22:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.163 22:40:53 -- common/autotest_common.sh@10 -- # set +x 00:32:55.163 22:40:54 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:55.163 22:40:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:55.163 22:40:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:55.163 22:40:54 -- common/autotest_common.sh@10 -- # set +x 00:32:55.163 ************************************ 00:32:55.163 START TEST bdev_write_zeroes 00:32:55.163 ************************************ 00:32:55.163 22:40:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:55.163 [2024-07-14 22:40:54.091324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:55.163 [2024-07-14 22:40:54.091802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146171 ] 00:32:55.421 [2024-07-14 22:40:54.246736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.421 [2024-07-14 22:40:54.441397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.987 Running I/O for 1 seconds... 
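And bdev_write_zeroes swaps the workload for a one-second write_zeroes pass on the default single core (no -C/-m in the traced command):

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1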
00:32:56.920 00:32:56.920 Latency(us) 00:32:56.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.920 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:56.920 raid5f : 1.00 25491.60 99.58 0.00 0.00 5005.91 1467.11 5659.93 00:32:56.920 =================================================================================================================== 00:32:56.920 Total : 25491.60 99.58 0.00 0.00 5005.91 1467.11 5659.93 00:32:58.295 ************************************ 00:32:58.295 END TEST bdev_write_zeroes 00:32:58.295 ************************************ 00:32:58.295 00:32:58.295 real 0m3.115s 00:32:58.295 user 0m2.774s 00:32:58.295 sys 0m0.225s 00:32:58.295 22:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:58.295 22:40:57 -- common/autotest_common.sh@10 -- # set +x 00:32:58.295 22:40:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:58.295 22:40:57 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:58.295 22:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:58.295 22:40:57 -- common/autotest_common.sh@10 -- # set +x 00:32:58.295 ************************************ 00:32:58.295 START TEST bdev_json_nonenclosed 00:32:58.295 ************************************ 00:32:58.295 22:40:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:58.295 [2024-07-14 22:40:57.278079] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:58.295 [2024-07-14 22:40:57.278642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146228 ] 00:32:58.553 [2024-07-14 22:40:57.445658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.811 [2024-07-14 22:40:57.637228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.811 [2024-07-14 22:40:57.637717] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
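bdev_json_nonenclosed feeds the app a config whose top level is not a JSON object. The actual nonenclosed.json content is not shown in the trace, but a hypothetical file like the following would trip the same "not enclosed in {}" error:

    cat > nonenclosed.json <<'EOF'
    [
      { "subsystems": [] }
    ]
    EOF
    # expected to fail: the loader rejects any top-level value that is not an object
    ./build/examples/bdevperf --json nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 || echo 'rejected, as the test expects'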
00:32:58.811 [2024-07-14 22:40:57.637867] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:59.069 ************************************ 00:32:59.069 END TEST bdev_json_nonenclosed 00:32:59.069 ************************************ 00:32:59.069 00:32:59.069 real 0m0.803s 00:32:59.069 user 0m0.561s 00:32:59.069 sys 0m0.140s 00:32:59.069 22:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.069 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:32:59.069 22:40:58 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:59.069 22:40:58 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:59.069 22:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:59.069 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:32:59.069 ************************************ 00:32:59.069 START TEST bdev_json_nonarray 00:32:59.069 ************************************ 00:32:59.069 22:40:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:59.069 [2024-07-14 22:40:58.132547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:59.069 [2024-07-14 22:40:58.133132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146285 ] 00:32:59.327 [2024-07-14 22:40:58.306431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.584 [2024-07-14 22:40:58.501624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.585 [2024-07-14 22:40:58.502130] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
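bdev_json_nonarray is the sibling case: the top level is an object, but "subsystems" is not an array. Again the real nonarray.json is not shown; a hypothetical reproduction:

    cat > nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev" }
    }
    EOF
    # fails with: Invalid JSON configuration: 'subsystems' should be an array
    ./build/examples/bdevperf --json nonarray.json \
        -q 128 -o 4096 -w write_zeroes -t 1 || echo 'rejected, as the test expects'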
00:32:59.585 [2024-07-14 22:40:58.502302] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:59.843 00:32:59.843 real 0m0.814s 00:32:59.843 user 0m0.577s 00:32:59.843 sys 0m0.133s 00:32:59.843 22:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.843 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:32:59.843 ************************************ 00:32:59.843 END TEST bdev_json_nonarray 00:32:59.843 ************************************ 00:33:00.102 22:40:58 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:33:00.102 22:40:58 -- bdev/blockdev.sh@809 -- # cleanup 00:33:00.102 22:40:58 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:00.102 22:40:58 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:00.102 22:40:58 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:33:00.102 22:40:58 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:33:00.102 ************************************ 00:33:00.102 END TEST blockdev_raid5f 00:33:00.102 ************************************ 00:33:00.102 00:33:00.102 real 0m48.075s 00:33:00.102 user 1m5.621s 00:33:00.102 sys 0m4.438s 00:33:00.102 22:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.102 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:33:00.102 22:40:58 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:33:00.102 22:40:58 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:33:00.102 22:40:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:00.102 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:33:00.102 22:40:58 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:33:00.102 22:40:58 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:33:00.102 22:40:58 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:33:00.102 22:40:58 -- common/autotest_common.sh@10 -- # set +x 00:33:01.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:01.477 Waiting for block devices as requested 00:33:01.477 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:33:02.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:02.070 Cleaning 00:33:02.070 Removing: /var/run/dpdk/spdk0/config 00:33:02.070 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:02.070 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:02.070 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:02.070 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:02.070 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:02.070 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:02.070 Removing: /dev/shm/spdk_tgt_trace.pid105050 00:33:02.070 Removing: /var/run/dpdk/spdk0 00:33:02.070 Removing: /var/run/dpdk/spdk_pid104776 00:33:02.070 Removing: /var/run/dpdk/spdk_pid105050 00:33:02.070 Removing: /var/run/dpdk/spdk_pid105342 00:33:02.070 Removing: /var/run/dpdk/spdk_pid105612 00:33:02.070 Removing: /var/run/dpdk/spdk_pid105815 00:33:02.070 Removing: /var/run/dpdk/spdk_pid105924 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106028 
00:33:02.070 Removing: /var/run/dpdk/spdk_pid106169 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106271 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106324 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106391 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106462 00:33:02.070 Removing: /var/run/dpdk/spdk_pid106573 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107146 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107252 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107325 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107355 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107508 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107531 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107686 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107716 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107778 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107810 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107899 00:33:02.070 Removing: /var/run/dpdk/spdk_pid107924 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108113 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108178 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108226 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108304 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108402 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108441 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108556 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108590 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108643 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108676 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108749 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108783 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108830 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108871 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108918 00:33:02.070 Removing: /var/run/dpdk/spdk_pid108972 00:33:02.070 Removing: /var/run/dpdk/spdk_pid109027 00:33:02.070 Removing: /var/run/dpdk/spdk_pid109065 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109112 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109164 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109218 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109252 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109299 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109344 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109408 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109440 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109494 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109528 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109592 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109626 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109681 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109719 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109784 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109827 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109874 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109908 00:33:02.329 Removing: /var/run/dpdk/spdk_pid109962 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110015 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110066 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110110 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110160 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110224 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110274 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110313 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110367 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110419 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110466 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110556 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110686 00:33:02.329 Removing: /var/run/dpdk/spdk_pid110895 00:33:02.329 
Removing: /var/run/dpdk/spdk_pid110980 00:33:02.329 Removing: /var/run/dpdk/spdk_pid111067 00:33:02.329 Removing: /var/run/dpdk/spdk_pid112394 00:33:02.329 Removing: /var/run/dpdk/spdk_pid112638 00:33:02.329 Removing: /var/run/dpdk/spdk_pid112865 00:33:02.329 Removing: /var/run/dpdk/spdk_pid113007 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113168 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113255 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113293 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113331 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113847 00:33:02.330 Removing: /var/run/dpdk/spdk_pid113941 00:33:02.330 Removing: /var/run/dpdk/spdk_pid114081 00:33:02.330 Removing: /var/run/dpdk/spdk_pid114145 00:33:02.330 Removing: /var/run/dpdk/spdk_pid115401 00:33:02.330 Removing: /var/run/dpdk/spdk_pid116356 00:33:02.330 Removing: /var/run/dpdk/spdk_pid117318 00:33:02.330 Removing: /var/run/dpdk/spdk_pid118494 00:33:02.330 Removing: /var/run/dpdk/spdk_pid119637 00:33:02.330 Removing: /var/run/dpdk/spdk_pid120748 00:33:02.330 Removing: /var/run/dpdk/spdk_pid122317 00:33:02.330 Removing: /var/run/dpdk/spdk_pid123558 00:33:02.330 Removing: /var/run/dpdk/spdk_pid124816 00:33:02.330 Removing: /var/run/dpdk/spdk_pid125526 00:33:02.330 Removing: /var/run/dpdk/spdk_pid126139 00:33:02.330 Removing: /var/run/dpdk/spdk_pid126806 00:33:02.330 Removing: /var/run/dpdk/spdk_pid127340 00:33:02.330 Removing: /var/run/dpdk/spdk_pid127952 00:33:02.330 Removing: /var/run/dpdk/spdk_pid128546 00:33:02.330 Removing: /var/run/dpdk/spdk_pid129269 00:33:02.330 Removing: /var/run/dpdk/spdk_pid129820 00:33:02.330 Removing: /var/run/dpdk/spdk_pid131276 00:33:02.330 Removing: /var/run/dpdk/spdk_pid131888 00:33:02.330 Removing: /var/run/dpdk/spdk_pid132456 00:33:02.330 Removing: /var/run/dpdk/spdk_pid134097 00:33:02.330 Removing: /var/run/dpdk/spdk_pid134803 00:33:02.330 Removing: /var/run/dpdk/spdk_pid135458 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136284 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136358 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136413 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136486 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136613 00:33:02.330 Removing: /var/run/dpdk/spdk_pid136787 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137005 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137307 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137322 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137382 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137405 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137437 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137468 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137508 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137547 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137579 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137607 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137647 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137695 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137715 00:33:02.330 Removing: /var/run/dpdk/spdk_pid137743 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137775 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137802 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137827 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137870 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137901 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137923 00:33:02.589 Removing: /var/run/dpdk/spdk_pid137970 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138001 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138041 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138131 00:33:02.589 Removing: 
/var/run/dpdk/spdk_pid138179 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138204 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138250 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138271 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138293 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138379 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138399 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138442 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138467 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138492 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138509 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138548 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138577 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138594 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138618 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138662 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138709 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138747 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138792 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138817 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138840 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138897 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138923 00:33:02.589 Removing: /var/run/dpdk/spdk_pid138988 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139008 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139032 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139049 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139077 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139095 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139119 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139154 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139248 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139336 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139494 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139523 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139577 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139658 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139695 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139725 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139754 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139823 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139845 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139934 00:33:02.589 Removing: /var/run/dpdk/spdk_pid139993 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140045 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140330 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140455 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140502 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140598 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140690 00:33:02.589 Removing: /var/run/dpdk/spdk_pid140757 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141012 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141197 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141303 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141381 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141419 00:33:02.589 Removing: /var/run/dpdk/spdk_pid141502 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142091 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142135 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142479 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142586 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142710 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142760 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142798 00:33:02.589 Removing: /var/run/dpdk/spdk_pid142853 00:33:02.589 Removing: /var/run/dpdk/spdk_pid144298 00:33:02.589 Removing: /var/run/dpdk/spdk_pid144447 00:33:02.589 Removing: /var/run/dpdk/spdk_pid144451 00:33:02.589 Removing: 
/var/run/dpdk/spdk_pid144468 00:33:02.589 Removing: /var/run/dpdk/spdk_pid144990 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145102 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145276 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145356 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145422 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145724 00:33:02.589 Removing: /var/run/dpdk/spdk_pid145942 00:33:02.589 Removing: /var/run/dpdk/spdk_pid146049 00:33:02.589 Removing: /var/run/dpdk/spdk_pid146171 00:33:02.589 Removing: /var/run/dpdk/spdk_pid146228 00:33:02.589 Removing: /var/run/dpdk/spdk_pid146285 00:33:02.589 Clean 00:33:02.848 killing process with pid 93873 00:33:02.848 killing process with pid 93946 00:33:02.848 22:41:01 -- common/autotest_common.sh@1436 -- # return 0 00:33:02.848 22:41:01 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:33:02.848 22:41:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:02.848 22:41:01 -- common/autotest_common.sh@10 -- # set +x 00:33:02.848 22:41:01 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:33:02.848 22:41:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:02.848 22:41:01 -- common/autotest_common.sh@10 -- # set +x 00:33:02.848 22:41:01 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:02.848 22:41:01 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:02.848 22:41:01 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:02.848 22:41:01 -- spdk/autotest.sh@394 -- # hash lcov 00:33:02.848 22:41:01 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:02.848 22:41:01 -- spdk/autotest.sh@396 -- # hostname 00:33:02.848 22:41:01 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:03.107 geninfo: WARNING: invalid characters removed from testname! 
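The coverage capture above, together with the merge and filter calls that follow, boils down to one lcov pipeline (rc flags collected into a variable; the SPDK checkout path is shortened to $SPDK_DIR, and the filter patterns are exactly the ones the log shows):

    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    # capture what the test run touched (the baseline was captured earlier the same way)
    lcov $RC --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
    # merge baseline + test, then strip everything that is not SPDK's own code
    lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC --no-external -q -r cov_total.info "$pat" -o cov_total.info
    done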
00:33:49.777 22:41:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:51.156 22:41:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:54.441 22:41:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:57.726 22:41:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:01.011 22:41:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:04.327 22:42:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:07.612 22:42:06 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:07.612 22:42:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:07.612 22:42:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:07.612 22:42:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.612 22:42:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.612 22:42:06 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:07.612 22:42:06 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:07.612 22:42:06 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:07.612 22:42:06 -- paths/export.sh@5 -- $ export PATH 00:34:07.612 22:42:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:07.612 22:42:06 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:07.612 22:42:06 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:07.612 22:42:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720996926.XXXXXX 00:34:07.612 22:42:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720996926.Y9HHME 00:34:07.612 22:42:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:07.612 22:42:06 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:34:07.612 22:42:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:07.612 22:42:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:07.612 22:42:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:07.612 22:42:06 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:07.612 22:42:06 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:34:07.612 22:42:06 -- common/autotest_common.sh@10 -- $ set +x 00:34:07.612 22:42:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:34:07.612 22:42:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:07.612 22:42:06 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:07.612 22:42:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:07.612 22:42:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:07.612 22:42:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:07.612 22:42:06 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:34:07.612 22:42:06 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:34:07.612 22:42:06 -- common/autotest_common.sh@10 -- $ set +x 00:34:07.612 22:42:06 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:34:07.612 22:42:06 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:34:07.612 22:42:06 -- spdk/autopackage.sh@40 -- $ get_config_params 00:34:07.612 22:42:06 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:34:07.612 22:42:06 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:34:07.612 22:42:06 -- common/autotest_common.sh@10 -- $ set +x 00:34:07.612 22:42:06 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:34:07.612 22:42:06 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:34:07.871 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:07.871 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:08.130 Using 'verbs' RDMA provider 00:34:20.894 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:34:33.096 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:34:33.096 Creating mk/config.mk...done. 00:34:33.096 Creating mk/cc.flags.mk...done. 00:34:33.096 Type 'make' to build. 00:34:33.096 22:42:32 -- spdk/autopackage.sh@43 -- $ make -j10 00:34:33.354 make[1]: Nothing to be done for 'all'. 00:34:38.626 The Meson build system 00:34:38.626 Version: 1.4.0 00:34:38.626 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:34:38.626 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:34:38.626 Build type: native build 00:34:38.626 Program cat found: YES (/usr/bin/cat) 00:34:38.626 Project name: DPDK 00:34:38.626 Project version: 23.11.0 00:34:38.626 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:34:38.626 C linker for the host machine: cc ld.bfd 2.34 00:34:38.626 Host machine cpu family: x86_64 00:34:38.626 Host machine cpu: x86_64 00:34:38.626 Message: ## Building in Developer Mode ## 00:34:38.626 Program pkg-config found: YES (/usr/bin/pkg-config) 00:34:38.626 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:34:38.626 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:34:38.626 Program python3 found: YES (/usr/bin/python3) 00:34:38.626 Program cat found: YES (/usr/bin/cat) 00:34:38.626 Compiler for C supports arguments -march=native: YES 00:34:38.626 Checking for size of "void *" : 8 00:34:38.626 Checking for size of "void *" : 8 (cached) 00:34:38.626 Library m found: YES 00:34:38.626 Library numa found: YES 00:34:38.626 Has header "numaif.h" : YES 00:34:38.626 Library fdt found: NO 00:34:38.626 Library execinfo found: NO 00:34:38.626 Has header "execinfo.h" : YES 00:34:38.626 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:34:38.626 Run-time dependency libarchive found: NO (tried pkgconfig) 00:34:38.626 Run-time dependency libbsd found: NO (tried pkgconfig) 00:34:38.626 Run-time dependency jansson found: NO (tried pkgconfig) 00:34:38.626 Run-time dependency openssl found: YES 1.1.1f 00:34:38.626 Run-time dependency libpcap found: NO (tried pkgconfig) 00:34:38.626 Library pcap found: NO 00:34:38.626 Compiler for C supports arguments -Wcast-qual: YES 00:34:38.626 Compiler for C supports arguments -Wdeprecated: YES 00:34:38.626 Compiler for C supports arguments -Wformat: YES 00:34:38.626 Compiler for C supports arguments -Wformat-nonliteral: YES 00:34:38.626 Compiler for C supports arguments -Wformat-security: YES 00:34:38.626 Compiler for C supports arguments -Wmissing-declarations: YES 00:34:38.626 Compiler for C supports arguments -Wmissing-prototypes: YES 00:34:38.626 Compiler for C supports arguments -Wnested-externs: YES 00:34:38.626 Compiler for C supports arguments -Wold-style-definition: YES 00:34:38.626 Compiler for C supports arguments -Wpointer-arith: YES 00:34:38.626 Compiler for C supports arguments -Wsign-compare: YES 00:34:38.626 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:34:38.626 Compiler for C supports arguments -Wundef: YES 00:34:38.626 Compiler for C supports arguments -Wwrite-strings: YES 00:34:38.626 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:34:38.626 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:34:38.626 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:34:38.626 Program objdump found: YES (/usr/bin/objdump) 00:34:38.626 Compiler for C supports arguments -mavx512f: YES 00:34:38.626 Checking if "AVX512 checking" compiles: YES 00:34:38.626 Fetching value of define "__SSE4_2__" : 1 00:34:38.626 Fetching value of define "__AES__" : 1 00:34:38.626 Fetching value of define "__AVX__" : 1 00:34:38.626 Fetching value of define "__AVX2__" : 1 00:34:38.626 Fetching value of define "__AVX512BW__" : (undefined) 00:34:38.626 Fetching value of define "__AVX512CD__" : (undefined) 00:34:38.626 Fetching value of define "__AVX512DQ__" : (undefined) 00:34:38.626 Fetching value of define "__AVX512F__" : (undefined) 00:34:38.626 Fetching value of define "__AVX512VL__" : (undefined) 00:34:38.626 Fetching value of define "__PCLMUL__" : 1 00:34:38.626 Fetching value of define "__RDRND__" : 1 00:34:38.626 Fetching value of define "__RDSEED__" : 1 00:34:38.626 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:34:38.626 Fetching value of define "__znver1__" : (undefined) 00:34:38.626 Fetching value of define "__znver2__" : (undefined) 00:34:38.627 Fetching value of define "__znver3__" : (undefined) 00:34:38.627 Fetching value of define "__znver4__" : (undefined) 00:34:38.627 Compiler for C supports arguments -ffat-lto-objects: YES 00:34:38.627 Library asan found: YES 00:34:38.627 Compiler for C supports arguments -Wno-format-truncation: YES 00:34:38.627 Message: lib/log: Defining dependency "log" 00:34:38.627 Message: lib/kvargs: Defining dependency "kvargs" 00:34:38.627 Message: lib/telemetry: Defining dependency "telemetry" 00:34:38.627 Library rt found: YES 00:34:38.627 Checking for function "getentropy" : NO 00:34:38.627 Message: lib/eal: Defining dependency "eal" 00:34:38.627 Message: lib/ring: Defining dependency "ring" 00:34:38.627 Message: lib/rcu: Defining dependency "rcu" 00:34:38.627 Message: lib/mempool: Defining dependency "mempool" 00:34:38.627 Message: lib/mbuf: Defining dependency "mbuf" 00:34:38.627 Fetching value of define "__PCLMUL__" : 1 (cached) 00:34:38.627 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:34:38.627 Compiler for C supports arguments -mpclmul: YES 00:34:38.627 Compiler for C supports arguments -maes: YES 00:34:38.627 Compiler for C supports arguments -mavx512f: YES (cached) 00:34:38.627 Compiler for C supports arguments -mavx512bw: YES 00:34:38.627 Compiler for C supports arguments -mavx512dq: YES 00:34:38.627 Compiler for C supports arguments -mavx512vl: YES 00:34:38.627 Compiler for C supports arguments -mvpclmulqdq: YES 00:34:38.627 Compiler for C supports arguments -mavx2: YES 00:34:38.627 Compiler for C supports arguments -mavx: YES 00:34:38.627 Message: lib/net: Defining dependency "net" 00:34:38.627 Message: lib/meter: Defining dependency "meter" 00:34:38.627 Message: lib/ethdev: Defining dependency "ethdev" 00:34:38.627 Message: lib/pci: Defining dependency "pci" 00:34:38.627 Message: lib/cmdline: Defining dependency "cmdline" 00:34:38.627 Message: lib/hash: Defining dependency "hash" 00:34:38.627 Message: lib/timer: Defining dependency "timer" 00:34:38.627 Message: lib/compressdev: 
Defining dependency "compressdev" 00:34:38.627 Message: lib/cryptodev: Defining dependency "cryptodev" 00:34:38.627 Message: lib/dmadev: Defining dependency "dmadev" 00:34:38.627 Compiler for C supports arguments -Wno-cast-qual: YES 00:34:38.627 Message: lib/power: Defining dependency "power" 00:34:38.627 Message: lib/reorder: Defining dependency "reorder" 00:34:38.627 Message: lib/security: Defining dependency "security" 00:34:38.627 Has header "linux/userfaultfd.h" : YES 00:34:38.627 Has header "linux/vduse.h" : NO 00:34:38.627 Message: lib/vhost: Defining dependency "vhost" 00:34:38.627 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:34:38.627 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:34:38.627 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:34:38.627 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:34:38.627 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:34:38.627 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:34:38.627 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:34:38.627 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:34:38.627 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:34:38.627 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:34:38.627 Program doxygen found: YES (/usr/bin/doxygen) 00:34:38.627 Configuring doxy-api-html.conf using configuration 00:34:38.627 Configuring doxy-api-man.conf using configuration 00:34:38.627 Program mandb found: YES (/usr/bin/mandb) 00:34:38.627 Program sphinx-build found: NO 00:34:38.627 Configuring rte_build_config.h using configuration 00:34:38.627 Message: 00:34:38.627 ================= 00:34:38.627 Applications Enabled 00:34:38.627 ================= 00:34:38.627 00:34:38.627 apps: 00:34:38.627 00:34:38.627 00:34:38.627 Message: 00:34:38.627 ================= 00:34:38.627 Libraries Enabled 00:34:38.627 ================= 00:34:38.627 00:34:38.627 libs: 00:34:38.627 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:34:38.627 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:34:38.627 cryptodev, dmadev, power, reorder, security, vhost, 00:34:38.627 00:34:38.627 Message: 00:34:38.627 =============== 00:34:38.627 Drivers Enabled 00:34:38.627 =============== 00:34:38.627 00:34:38.627 common: 00:34:38.627 00:34:38.627 bus: 00:34:38.627 pci, vdev, 00:34:38.627 mempool: 00:34:38.627 ring, 00:34:38.627 dma: 00:34:38.627 00:34:38.627 net: 00:34:38.627 00:34:38.627 crypto: 00:34:38.627 00:34:38.627 compress: 00:34:38.627 00:34:38.627 vdpa: 00:34:38.627 00:34:38.627 00:34:38.627 Message: 00:34:38.627 ================= 00:34:38.627 Content Skipped 00:34:38.627 ================= 00:34:38.627 00:34:38.627 apps: 00:34:38.627 dumpcap: explicitly disabled via build config 00:34:38.627 graph: explicitly disabled via build config 00:34:38.627 pdump: explicitly disabled via build config 00:34:38.627 proc-info: explicitly disabled via build config 00:34:38.627 test-acl: explicitly disabled via build config 00:34:38.627 test-bbdev: explicitly disabled via build config 00:34:38.627 test-cmdline: explicitly disabled via build config 00:34:38.627 test-compress-perf: explicitly disabled via build config 00:34:38.627 test-crypto-perf: explicitly disabled via build config 00:34:38.627 test-dma-perf: explicitly disabled via build config 00:34:38.627 test-eventdev: explicitly disabled via build config 00:34:38.627 
test-fib: explicitly disabled via build config 00:34:38.627 test-flow-perf: explicitly disabled via build config 00:34:38.627 test-gpudev: explicitly disabled via build config 00:34:38.627 test-mldev: explicitly disabled via build config 00:34:38.627 test-pipeline: explicitly disabled via build config 00:34:38.627 test-pmd: explicitly disabled via build config 00:34:38.627 test-regex: explicitly disabled via build config 00:34:38.627 test-sad: explicitly disabled via build config 00:34:38.627 test-security-perf: explicitly disabled via build config 00:34:38.627 00:34:38.627 libs: 00:34:38.627 metrics: explicitly disabled via build config 00:34:38.627 acl: explicitly disabled via build config 00:34:38.627 bbdev: explicitly disabled via build config 00:34:38.627 bitratestats: explicitly disabled via build config 00:34:38.627 bpf: explicitly disabled via build config 00:34:38.627 cfgfile: explicitly disabled via build config 00:34:38.627 distributor: explicitly disabled via build config 00:34:38.627 efd: explicitly disabled via build config 00:34:38.627 eventdev: explicitly disabled via build config 00:34:38.627 dispatcher: explicitly disabled via build config 00:34:38.627 gpudev: explicitly disabled via build config 00:34:38.627 gro: explicitly disabled via build config 00:34:38.627 gso: explicitly disabled via build config 00:34:38.627 ip_frag: explicitly disabled via build config 00:34:38.627 jobstats: explicitly disabled via build config 00:34:38.627 latencystats: explicitly disabled via build config 00:34:38.627 lpm: explicitly disabled via build config 00:34:38.627 member: explicitly disabled via build config 00:34:38.627 pcapng: explicitly disabled via build config 00:34:38.627 rawdev: explicitly disabled via build config 00:34:38.627 regexdev: explicitly disabled via build config 00:34:38.627 mldev: explicitly disabled via build config 00:34:38.627 rib: explicitly disabled via build config 00:34:38.627 sched: explicitly disabled via build config 00:34:38.627 stack: explicitly disabled via build config 00:34:38.627 ipsec: explicitly disabled via build config 00:34:38.627 pdcp: explicitly disabled via build config 00:34:38.627 fib: explicitly disabled via build config 00:34:38.627 port: explicitly disabled via build config 00:34:38.627 pdump: explicitly disabled via build config 00:34:38.627 table: explicitly disabled via build config 00:34:38.627 pipeline: explicitly disabled via build config 00:34:38.627 graph: explicitly disabled via build config 00:34:38.627 node: explicitly disabled via build config 00:34:38.627 00:34:38.627 drivers: 00:34:38.627 common/cpt: not in enabled drivers build config 00:34:38.627 common/dpaax: not in enabled drivers build config 00:34:38.627 common/iavf: not in enabled drivers build config 00:34:38.627 common/idpf: not in enabled drivers build config 00:34:38.627 common/mvep: not in enabled drivers build config 00:34:38.627 common/octeontx: not in enabled drivers build config 00:34:38.627 bus/auxiliary: not in enabled drivers build config 00:34:38.627 bus/cdx: not in enabled drivers build config 00:34:38.627 bus/dpaa: not in enabled drivers build config 00:34:38.627 bus/fslmc: not in enabled drivers build config 00:34:38.627 bus/ifpga: not in enabled drivers build config 00:34:38.627 bus/platform: not in enabled drivers build config 00:34:38.627 bus/vmbus: not in enabled drivers build config 00:34:38.627 common/cnxk: not in enabled drivers build config 00:34:38.627 common/mlx5: not in enabled drivers build config 00:34:38.627 common/nfp: not in enabled 
drivers build config 00:34:38.627 common/qat: not in enabled drivers build config 00:34:38.627 common/sfc_efx: not in enabled drivers build config 00:34:38.627 mempool/bucket: not in enabled drivers build config 00:34:38.627 mempool/cnxk: not in enabled drivers build config 00:34:38.627 mempool/dpaa: not in enabled drivers build config 00:34:38.627 mempool/dpaa2: not in enabled drivers build config 00:34:38.627 mempool/octeontx: not in enabled drivers build config 00:34:38.627 mempool/stack: not in enabled drivers build config 00:34:38.627 dma/cnxk: not in enabled drivers build config 00:34:38.627 dma/dpaa: not in enabled drivers build config 00:34:38.627 dma/dpaa2: not in enabled drivers build config 00:34:38.627 dma/hisilicon: not in enabled drivers build config 00:34:38.627 dma/idxd: not in enabled drivers build config 00:34:38.627 dma/ioat: not in enabled drivers build config 00:34:38.627 dma/skeleton: not in enabled drivers build config 00:34:38.627 net/af_packet: not in enabled drivers build config 00:34:38.627 net/af_xdp: not in enabled drivers build config 00:34:38.627 net/ark: not in enabled drivers build config 00:34:38.627 net/atlantic: not in enabled drivers build config 00:34:38.627 net/avp: not in enabled drivers build config 00:34:38.627 net/axgbe: not in enabled drivers build config 00:34:38.627 net/bnx2x: not in enabled drivers build config 00:34:38.627 net/bnxt: not in enabled drivers build config 00:34:38.627 net/bonding: not in enabled drivers build config 00:34:38.627 net/cnxk: not in enabled drivers build config 00:34:38.627 net/cpfl: not in enabled drivers build config 00:34:38.627 net/cxgbe: not in enabled drivers build config 00:34:38.627 net/dpaa: not in enabled drivers build config 00:34:38.627 net/dpaa2: not in enabled drivers build config 00:34:38.627 net/e1000: not in enabled drivers build config 00:34:38.627 net/ena: not in enabled drivers build config 00:34:38.627 net/enetc: not in enabled drivers build config 00:34:38.627 net/enetfec: not in enabled drivers build config 00:34:38.627 net/enic: not in enabled drivers build config 00:34:38.628 net/failsafe: not in enabled drivers build config 00:34:38.628 net/fm10k: not in enabled drivers build config 00:34:38.628 net/gve: not in enabled drivers build config 00:34:38.628 net/hinic: not in enabled drivers build config 00:34:38.628 net/hns3: not in enabled drivers build config 00:34:38.628 net/i40e: not in enabled drivers build config 00:34:38.628 net/iavf: not in enabled drivers build config 00:34:38.628 net/ice: not in enabled drivers build config 00:34:38.628 net/idpf: not in enabled drivers build config 00:34:38.628 net/igc: not in enabled drivers build config 00:34:38.628 net/ionic: not in enabled drivers build config 00:34:38.628 net/ipn3ke: not in enabled drivers build config 00:34:38.628 net/ixgbe: not in enabled drivers build config 00:34:38.628 net/mana: not in enabled drivers build config 00:34:38.628 net/memif: not in enabled drivers build config 00:34:38.628 net/mlx4: not in enabled drivers build config 00:34:38.628 net/mlx5: not in enabled drivers build config 00:34:38.628 net/mvneta: not in enabled drivers build config 00:34:38.628 net/mvpp2: not in enabled drivers build config 00:34:38.628 net/netvsc: not in enabled drivers build config 00:34:38.628 net/nfb: not in enabled drivers build config 00:34:38.628 net/nfp: not in enabled drivers build config 00:34:38.628 net/ngbe: not in enabled drivers build config 00:34:38.628 net/null: not in enabled drivers build config 00:34:38.628 net/octeontx: not 
in enabled drivers build config 00:34:38.628 net/octeon_ep: not in enabled drivers build config 00:34:38.628 net/pcap: not in enabled drivers build config 00:34:38.628 net/pfe: not in enabled drivers build config 00:34:38.628 net/qede: not in enabled drivers build config 00:34:38.628 net/ring: not in enabled drivers build config 00:34:38.628 net/sfc: not in enabled drivers build config 00:34:38.628 net/softnic: not in enabled drivers build config 00:34:38.628 net/tap: not in enabled drivers build config 00:34:38.628 net/thunderx: not in enabled drivers build config 00:34:38.628 net/txgbe: not in enabled drivers build config 00:34:38.628 net/vdev_netvsc: not in enabled drivers build config 00:34:38.628 net/vhost: not in enabled drivers build config 00:34:38.628 net/virtio: not in enabled drivers build config 00:34:38.628 net/vmxnet3: not in enabled drivers build config 00:34:38.628 raw/*: missing internal dependency, "rawdev" 00:34:38.628 crypto/armv8: not in enabled drivers build config 00:34:38.628 crypto/bcmfs: not in enabled drivers build config 00:34:38.628 crypto/caam_jr: not in enabled drivers build config 00:34:38.628 crypto/ccp: not in enabled drivers build config 00:34:38.628 crypto/cnxk: not in enabled drivers build config 00:34:38.628 crypto/dpaa_sec: not in enabled drivers build config 00:34:38.628 crypto/dpaa2_sec: not in enabled drivers build config 00:34:38.628 crypto/ipsec_mb: not in enabled drivers build config 00:34:38.628 crypto/mlx5: not in enabled drivers build config 00:34:38.628 crypto/mvsam: not in enabled drivers build config 00:34:38.628 crypto/nitrox: not in enabled drivers build config 00:34:38.628 crypto/null: not in enabled drivers build config 00:34:38.628 crypto/octeontx: not in enabled drivers build config 00:34:38.628 crypto/openssl: not in enabled drivers build config 00:34:38.628 crypto/scheduler: not in enabled drivers build config 00:34:38.628 crypto/uadk: not in enabled drivers build config 00:34:38.628 crypto/virtio: not in enabled drivers build config 00:34:38.628 compress/isal: not in enabled drivers build config 00:34:38.628 compress/mlx5: not in enabled drivers build config 00:34:38.628 compress/octeontx: not in enabled drivers build config 00:34:38.628 compress/zlib: not in enabled drivers build config 00:34:38.628 regex/*: missing internal dependency, "regexdev" 00:34:38.628 ml/*: missing internal dependency, "mldev" 00:34:38.628 vdpa/ifc: not in enabled drivers build config 00:34:38.628 vdpa/mlx5: not in enabled drivers build config 00:34:38.628 vdpa/nfp: not in enabled drivers build config 00:34:38.628 vdpa/sfc: not in enabled drivers build config 00:34:38.628 event/*: missing internal dependency, "eventdev" 00:34:38.628 baseband/*: missing internal dependency, "bbdev" 00:34:38.628 gpu/*: missing internal dependency, "gpudev" 00:34:38.628 00:34:38.628 00:34:38.888 Build targets in project: 85 00:34:38.888 00:34:38.888 DPDK 23.11.0 00:34:38.888 00:34:38.888 User defined options 00:34:38.888 default_library : static 00:34:38.888 libdir : lib 00:34:38.888 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:38.888 b_lto : true 00:34:38.888 b_sanitize : address 00:34:38.888 c_args : -fPIC -Werror 00:34:38.888 c_link_args : 00:34:38.888 cpu_instruction_set: native 00:34:38.888 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:34:38.888 
disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:34:38.888 enable_docs : false 00:34:38.888 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:34:38.888 enable_kmods : false 00:34:38.888 tests : false 00:34:38.888 00:34:38.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:34:39.457 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:34:39.457 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:34:39.717 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:34:39.717 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:34:39.717 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:34:39.717 [5/264] Linking static target lib/librte_kvargs.a 00:34:39.717 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:34:39.717 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:34:39.717 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:34:39.975 [9/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:34:39.975 [10/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:34:39.975 [11/264] Linking static target lib/librte_log.a 00:34:39.975 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:34:39.975 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:34:39.975 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:34:40.234 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:34:40.234 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:34:40.234 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:34:40.493 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:34:40.493 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:34:40.493 [20/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:34:40.493 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:34:40.493 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:34:40.493 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:34:40.752 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:34:40.752 [25/264] Linking target lib/librte_log.so.24.0 00:34:40.752 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:34:40.752 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:34:40.752 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:34:40.752 [29/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:34:41.011 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:34:41.011 [31/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:34:41.011 [32/264] Linking static target lib/librte_telemetry.a 00:34:41.011 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:34:41.011 [34/264] Linking 
target lib/librte_kvargs.so.24.0 00:34:41.011 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:34:41.011 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:34:41.011 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:34:41.011 [38/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:34:41.011 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:34:41.270 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:34:41.270 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:34:41.270 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:34:41.529 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:34:41.529 [44/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:34:41.529 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:34:41.529 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:34:41.788 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:34:41.788 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:34:41.788 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:34:41.788 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:34:41.788 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:34:42.046 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:34:42.046 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:34:42.046 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:34:42.046 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:34:42.046 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:34:42.046 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:34:42.046 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:34:42.313 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:34:42.314 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:34:42.314 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:34:42.314 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:34:42.314 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:34:42.314 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:34:42.601 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:34:42.601 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:34:42.601 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:34:42.601 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:34:42.601 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:34:42.870 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:34:42.870 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:34:42.870 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:34:42.870 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:34:42.870 [74/264] 
Linking target lib/librte_telemetry.so.24.0 00:34:42.870 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:34:42.870 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:34:42.870 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:34:42.870 [78/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:34:43.127 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:34:43.127 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:34:43.127 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:34:43.386 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:34:43.386 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:34:43.386 [84/264] Linking static target lib/librte_ring.a 00:34:43.386 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:34:43.386 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:34:43.386 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:34:43.644 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:34:43.644 [89/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:34:43.644 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:34:43.902 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:34:43.902 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:34:43.902 [93/264] Linking static target lib/librte_eal.a 00:34:43.902 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:34:43.902 [95/264] Linking static target lib/librte_mempool.a 00:34:43.902 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:34:43.902 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:34:44.161 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:34:44.161 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:34:44.161 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:34:44.161 [101/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:34:44.161 [102/264] Linking static target lib/librte_rcu.a 00:34:44.419 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:34:44.419 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:34:44.419 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:34:44.420 [106/264] Linking static target lib/librte_net.a 00:34:44.420 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:34:44.420 [108/264] Linking static target lib/librte_meter.a 00:34:44.678 [109/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:34:44.678 [110/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:34:44.678 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:34:44.678 [112/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:34:44.678 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:34:44.936 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:34:44.936 [115/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:34:45.194 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:34:45.194 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:34:45.760 [118/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:34:45.760 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:34:45.760 [120/264] Linking static target lib/librte_mbuf.a 00:34:45.760 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:34:45.760 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:34:46.018 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:34:46.018 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:34:46.018 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:34:46.018 [126/264] Linking static target lib/librte_pci.a 00:34:46.018 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:34:46.018 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:34:46.276 [129/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:46.276 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:34:46.276 [131/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:34:46.276 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:34:46.276 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:34:46.276 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:34:46.535 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:34:46.535 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:34:46.535 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:34:46.535 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:34:46.535 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:34:46.535 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:34:46.535 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:34:46.535 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:34:46.535 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:34:46.793 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:34:46.793 [145/264] Linking static target lib/librte_cmdline.a 00:34:47.051 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:34:47.051 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:34:47.309 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:34:47.309 [149/264] Linking static target lib/librte_timer.a 00:34:47.309 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:34:47.566 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:34:47.566 [152/264] Linking static target lib/librte_compressdev.a 00:34:47.566 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:34:47.566 [154/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:34:47.566 [155/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:34:47.566 [156/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:34:47.827 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:34:47.827 [158/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:34:48.085 [159/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:34:48.085 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:34:48.085 [161/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:48.085 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:34:48.085 [163/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:34:48.651 [164/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:34:48.651 [165/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:34:48.651 [166/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:34:48.651 [167/264] Linking static target lib/librte_dmadev.a 00:34:48.651 [168/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:34:48.651 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:34:48.908 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:34:48.908 [171/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:34:49.165 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:34:49.166 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:49.166 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:34:49.423 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:34:49.423 [176/264] Linking static target lib/librte_power.a 00:34:49.423 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:34:49.423 [178/264] Linking static target lib/librte_reorder.a 00:34:49.680 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:34:49.680 [180/264] Linking static target lib/librte_security.a 00:34:49.680 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:34:49.681 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:34:49.939 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:34:49.939 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:34:49.939 [185/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:34:50.230 [186/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:50.488 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:34:50.488 [188/264] Linking static target lib/librte_cryptodev.a 00:34:50.745 [189/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:50.745 [190/264] Linking static target lib/librte_ethdev.a 00:34:50.745 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:34:50.745 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:34:51.002 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:34:51.258 [194/264] Generating lib/eal.sym_chk with a custom command (wrapped by 
meson to capture output) 00:34:51.258 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:34:51.515 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:34:51.773 [197/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:34:51.773 [198/264] Linking static target lib/librte_hash.a 00:34:51.773 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:34:52.032 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:34:52.032 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:34:52.032 [202/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:52.290 [203/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:34:52.290 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:34:52.290 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:34:52.550 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:34:52.550 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:34:52.550 [208/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:34:52.550 [209/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:34:52.550 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:52.550 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:52.550 [212/264] Linking static target drivers/librte_bus_vdev.a 00:34:52.550 [213/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:34:52.810 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:52.810 [215/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:52.810 [216/264] Linking static target drivers/librte_bus_pci.a 00:34:52.810 [217/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:53.069 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:34:53.069 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:34:53.069 [220/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:34:53.069 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:53.069 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:53.069 [223/264] Linking static target drivers/librte_mempool_ring.a 00:34:53.069 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:57.254 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:03.821 [226/264] Linking target lib/librte_eal.so.24.0 00:35:03.821 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:35:03.821 [228/264] Linking target lib/librte_meter.so.24.0 00:35:03.821 [229/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:35:03.821 [230/264] Linking target lib/librte_ring.so.24.0 00:35:03.821 [231/264] Linking target lib/librte_pci.so.24.0 00:35:03.821 [232/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:35:04.080 [233/264] 
Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:35:04.080 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:35:04.080 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:35:04.339 [236/264] Linking target lib/librte_timer.so.24.0 00:35:04.339 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:35:04.339 [238/264] Linking target lib/librte_dmadev.so.24.0 00:35:04.339 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:35:04.906 [240/264] Linking target lib/librte_mempool.so.24.0 00:35:04.906 [241/264] Linking target lib/librte_rcu.so.24.0 00:35:04.906 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:35:05.171 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:35:05.446 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:35:05.715 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:35:07.090 [246/264] Linking target lib/librte_mbuf.so.24.0 00:35:07.348 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:35:07.916 [248/264] Linking target lib/librte_reorder.so.24.0 00:35:07.916 [249/264] Linking target lib/librte_compressdev.so.24.0 00:35:08.483 [250/264] Linking target lib/librte_net.so.24.0 00:35:08.483 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:35:09.858 [252/264] Linking target lib/librte_cmdline.so.24.0 00:35:10.116 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:35:10.116 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:35:10.684 [255/264] Linking target lib/librte_security.so.24.0 00:35:13.971 [256/264] Linking target lib/librte_hash.so.24.0 00:35:13.971 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:35:22.084 [258/264] Linking target lib/librte_ethdev.so.24.0 00:35:22.084 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:35:23.988 [260/264] Linking target lib/librte_power.so.24.0 00:35:30.545 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:35:30.545 [262/264] Linking static target lib/librte_vhost.a 00:35:31.480 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:36:27.722 [264/264] Linking target lib/librte_vhost.so.24.0 00:36:27.722 INFO: autodetecting backend as ninja 00:36:27.722 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:36:27.722 CC lib/log/log_flags.o 00:36:27.722 CC lib/log/log.o 00:36:27.722 CC lib/log/log_deprecated.o 00:36:27.722 CC lib/ut_mock/mock.o 00:36:27.722 CC lib/ut/ut.o 00:36:27.722 LIB libspdk_ut_mock.a 00:36:27.722 LIB libspdk_log.a 00:36:27.722 LIB libspdk_ut.a 00:36:27.722 CC lib/dma/dma.o 00:36:27.722 CC lib/ioat/ioat.o 00:36:27.722 CC lib/util/bit_array.o 00:36:27.722 CC lib/util/base64.o 00:36:27.722 CC lib/util/cpuset.o 00:36:27.722 CXX lib/trace_parser/trace.o 00:36:27.722 CC lib/util/crc16.o 00:36:27.722 CC lib/util/crc32.o 00:36:27.722 CC lib/util/crc32c.o 00:36:27.722 CC lib/vfio_user/host/vfio_user_pci.o 00:36:27.722 CC lib/util/crc32_ieee.o 00:36:27.722 CC lib/util/crc64.o 00:36:27.722 CC lib/util/dif.o 00:36:27.722 LIB libspdk_dma.a 00:36:27.722 CC lib/util/fd.o 00:36:27.722 CC lib/vfio_user/host/vfio_user.o 
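For reference, the DPDK sub-build whose ninja output ends above can be reproduced standalone with the user options printed in its "User defined options" summary. A sketch under that assumption (meson and ninja on PATH; the long disable_apps/disable_libs lists from the summary are left out for brevity):

    # Mirror the logged configuration: static archives, LTO, ASan,
    # -fPIC -Werror, ring/PCI/vdev drivers only, no docs, no tests.
    meson setup build-tmp \
        -Ddefault_library=static -Db_lto=true -Db_sanitize=address \
        -Dc_args='-fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Dtests=false -Denable_kmods=false
    ninja -C build-tmp -j 10

Static libraries plus b_lto=true are what make the later SPDK link steps pull whole DPDK archives into each binary, which is also why the ninja phase ends with symbol-file generation rather than shared-object installs.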
00:36:27.722 CC lib/util/file.o 00:36:27.722 CC lib/util/hexlify.o 00:36:27.722 LIB libspdk_ioat.a 00:36:27.722 CC lib/util/iov.o 00:36:27.722 CC lib/util/math.o 00:36:27.722 CC lib/util/pipe.o 00:36:27.722 CC lib/util/strerror_tls.o 00:36:27.722 CC lib/util/string.o 00:36:27.722 CC lib/util/uuid.o 00:36:27.722 CC lib/util/fd_group.o 00:36:27.722 CC lib/util/xor.o 00:36:27.722 LIB libspdk_vfio_user.a 00:36:27.722 CC lib/util/zipf.o 00:36:27.722 LIB libspdk_util.a 00:36:27.722 CC lib/idxd/idxd.o 00:36:27.722 CC lib/idxd/idxd_user.o 00:36:27.722 CC lib/vmd/vmd.o 00:36:27.722 CC lib/vmd/led.o 00:36:27.722 CC lib/rdma/common.o 00:36:27.722 CC lib/json/json_parse.o 00:36:27.722 CC lib/conf/conf.o 00:36:27.722 CC lib/rdma/rdma_verbs.o 00:36:27.722 CC lib/env_dpdk/env.o 00:36:27.722 LIB libspdk_trace_parser.a 00:36:27.722 CC lib/json/json_util.o 00:36:27.722 CC lib/json/json_write.o 00:36:27.722 CC lib/env_dpdk/memory.o 00:36:27.722 CC lib/env_dpdk/pci.o 00:36:27.722 LIB libspdk_conf.a 00:36:27.722 CC lib/env_dpdk/init.o 00:36:27.722 LIB libspdk_rdma.a 00:36:27.722 CC lib/env_dpdk/threads.o 00:36:27.722 CC lib/env_dpdk/pci_ioat.o 00:36:27.722 CC lib/env_dpdk/pci_virtio.o 00:36:27.722 CC lib/env_dpdk/pci_vmd.o 00:36:27.722 CC lib/env_dpdk/pci_idxd.o 00:36:27.722 CC lib/env_dpdk/pci_event.o 00:36:27.722 LIB libspdk_json.a 00:36:27.722 LIB libspdk_idxd.a 00:36:27.722 LIB libspdk_vmd.a 00:36:27.722 CC lib/env_dpdk/sigbus_handler.o 00:36:27.722 CC lib/env_dpdk/pci_dpdk.o 00:36:27.722 CC lib/env_dpdk/pci_dpdk_2207.o 00:36:27.722 CC lib/env_dpdk/pci_dpdk_2211.o 00:36:27.722 CC lib/jsonrpc/jsonrpc_client.o 00:36:27.722 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:36:27.722 CC lib/jsonrpc/jsonrpc_server.o 00:36:27.723 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:36:27.723 LIB libspdk_jsonrpc.a 00:36:27.723 CC lib/rpc/rpc.o 00:36:27.723 LIB libspdk_env_dpdk.a 00:36:27.723 LIB libspdk_rpc.a 00:36:27.723 CC lib/trace/trace_flags.o 00:36:27.723 CC lib/trace/trace.o 00:36:27.723 CC lib/trace/trace_rpc.o 00:36:27.723 CC lib/notify/notify_rpc.o 00:36:27.723 CC lib/notify/notify.o 00:36:27.723 CC lib/sock/sock.o 00:36:27.723 CC lib/sock/sock_rpc.o 00:36:27.723 LIB libspdk_notify.a 00:36:27.723 LIB libspdk_trace.a 00:36:27.723 LIB libspdk_sock.a 00:36:27.723 CC lib/thread/thread.o 00:36:27.723 CC lib/thread/iobuf.o 00:36:27.723 CC lib/nvme/nvme_ctrlr_cmd.o 00:36:27.723 CC lib/nvme/nvme_fabric.o 00:36:27.723 CC lib/nvme/nvme_ctrlr.o 00:36:27.723 CC lib/nvme/nvme_ns_cmd.o 00:36:27.723 CC lib/nvme/nvme_qpair.o 00:36:27.723 CC lib/nvme/nvme_ns.o 00:36:27.723 CC lib/nvme/nvme_pcie_common.o 00:36:27.723 CC lib/nvme/nvme_pcie.o 00:36:27.723 CC lib/nvme/nvme.o 00:36:27.981 CC lib/nvme/nvme_quirks.o 00:36:27.981 CC lib/nvme/nvme_transport.o 00:36:27.981 CC lib/nvme/nvme_discovery.o 00:36:27.981 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:36:27.981 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:36:28.240 CC lib/nvme/nvme_tcp.o 00:36:28.240 CC lib/nvme/nvme_opal.o 00:36:28.240 CC lib/nvme/nvme_io_msg.o 00:36:28.240 LIB libspdk_thread.a 00:36:28.240 CC lib/nvme/nvme_poll_group.o 00:36:28.500 CC lib/nvme/nvme_zns.o 00:36:28.500 CC lib/accel/accel.o 00:36:28.500 CC lib/accel/accel_rpc.o 00:36:28.500 CC lib/accel/accel_sw.o 00:36:28.500 CC lib/blob/blobstore.o 00:36:28.759 CC lib/init/json_config.o 00:36:28.759 CC lib/init/subsystem.o 00:36:28.759 CC lib/init/subsystem_rpc.o 00:36:28.759 CC lib/nvme/nvme_cuse.o 00:36:28.759 CC lib/nvme/nvme_vfio_user.o 00:36:28.759 CC lib/init/rpc.o 00:36:28.759 CC lib/blob/request.o 00:36:28.759 CC lib/blob/zeroes.o 
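Each "LIB libspdk_*.a" line in this stretch is the archive step that closes out a component's "CC" lines. Reduced to first principles (an illustrative sketch, not the literal SPDK make rules; the include path and flags are placeholders):

    # Compile a component's sources to objects -- the "CC" lines...
    cc -c -fPIC -Iinclude -o log.o lib/log/log.c
    cc -c -fPIC -Iinclude -o log_flags.o lib/log/log_flags.c
    # ...then bundle them into the static archive that later LINK
    # steps consume -- the "LIB libspdk_log.a" line.
    ar rcs libspdk_log.a log.o log_flags.o

The per-component archives are what let the test binaries further down link only the subsystems they exercise.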
00:36:28.759 CC lib/virtio/virtio.o 00:36:28.759 CC lib/nvme/nvme_rdma.o 00:36:29.018 LIB libspdk_init.a 00:36:29.018 CC lib/virtio/virtio_vhost_user.o 00:36:29.018 CC lib/blob/blob_bs_dev.o 00:36:29.018 CC lib/virtio/virtio_vfio_user.o 00:36:29.018 CC lib/event/app.o 00:36:29.018 CC lib/event/reactor.o 00:36:29.312 CC lib/event/log_rpc.o 00:36:29.312 CC lib/event/app_rpc.o 00:36:29.312 CC lib/virtio/virtio_pci.o 00:36:29.312 CC lib/event/scheduler_static.o 00:36:29.312 LIB libspdk_accel.a 00:36:29.312 CC lib/bdev/bdev.o 00:36:29.312 CC lib/bdev/bdev_zone.o 00:36:29.312 CC lib/bdev/part.o 00:36:29.312 CC lib/bdev/scsi_nvme.o 00:36:29.312 CC lib/bdev/bdev_rpc.o 00:36:29.312 LIB libspdk_event.a 00:36:29.312 LIB libspdk_virtio.a 00:36:29.571 LIB libspdk_nvme.a 00:36:30.138 LIB libspdk_blob.a 00:36:30.396 CC lib/lvol/lvol.o 00:36:30.396 CC lib/blobfs/blobfs.o 00:36:30.397 CC lib/blobfs/tree.o 00:36:30.964 LIB libspdk_bdev.a 00:36:30.964 LIB libspdk_lvol.a 00:36:30.964 LIB libspdk_blobfs.a 00:36:30.964 CC lib/scsi/dev.o 00:36:30.964 CC lib/nbd/nbd.o 00:36:30.964 CC lib/scsi/lun.o 00:36:30.964 CC lib/scsi/scsi.o 00:36:30.964 CC lib/scsi/scsi_bdev.o 00:36:30.964 CC lib/scsi/port.o 00:36:30.964 CC lib/nbd/nbd_rpc.o 00:36:30.964 CC lib/scsi/scsi_pr.o 00:36:30.964 CC lib/nvmf/ctrlr.o 00:36:30.964 CC lib/ftl/ftl_core.o 00:36:30.964 CC lib/ftl/ftl_init.o 00:36:30.964 CC lib/ftl/ftl_layout.o 00:36:31.223 CC lib/ftl/ftl_debug.o 00:36:31.223 CC lib/ftl/ftl_io.o 00:36:31.223 CC lib/ftl/ftl_sb.o 00:36:31.223 CC lib/scsi/scsi_rpc.o 00:36:31.223 LIB libspdk_nbd.a 00:36:31.223 CC lib/scsi/task.o 00:36:31.223 CC lib/ftl/ftl_l2p_flat.o 00:36:31.223 CC lib/ftl/ftl_l2p.o 00:36:31.223 CC lib/nvmf/ctrlr_discovery.o 00:36:31.223 CC lib/nvmf/ctrlr_bdev.o 00:36:31.223 CC lib/nvmf/subsystem.o 00:36:31.482 CC lib/nvmf/nvmf.o 00:36:31.482 CC lib/ftl/ftl_nv_cache.o 00:36:31.482 CC lib/nvmf/nvmf_rpc.o 00:36:31.482 LIB libspdk_scsi.a 00:36:31.482 CC lib/nvmf/transport.o 00:36:31.482 CC lib/ftl/ftl_band.o 00:36:31.482 CC lib/ftl/ftl_band_ops.o 00:36:31.482 CC lib/ftl/ftl_writer.o 00:36:31.741 CC lib/ftl/ftl_rq.o 00:36:31.741 CC lib/ftl/ftl_reloc.o 00:36:31.741 CC lib/ftl/ftl_l2p_cache.o 00:36:31.741 CC lib/ftl/ftl_p2l.o 00:36:31.741 CC lib/ftl/mngt/ftl_mngt.o 00:36:31.741 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:36:31.741 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:36:31.741 CC lib/ftl/mngt/ftl_mngt_startup.o 00:36:31.741 CC lib/ftl/mngt/ftl_mngt_md.o 00:36:31.999 CC lib/ftl/mngt/ftl_mngt_misc.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:36:32.000 CC lib/nvmf/tcp.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:36:32.000 CC lib/nvmf/rdma.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_band.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:36:32.000 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:36:32.000 CC lib/ftl/utils/ftl_conf.o 00:36:32.269 CC lib/ftl/utils/ftl_md.o 00:36:32.269 CC lib/ftl/utils/ftl_mempool.o 00:36:32.269 CC lib/iscsi/conn.o 00:36:32.269 CC lib/ftl/utils/ftl_bitmap.o 00:36:32.269 CC lib/iscsi/init_grp.o 00:36:32.269 CC lib/vhost/vhost.o 00:36:32.269 CC lib/ftl/utils/ftl_property.o 00:36:32.269 CC lib/iscsi/iscsi.o 00:36:32.269 CC lib/iscsi/md5.o 00:36:32.269 CC lib/iscsi/param.o 00:36:32.542 CC lib/iscsi/portal_grp.o 00:36:32.542 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:36:32.542 CC lib/iscsi/tgt_node.o 00:36:32.542 CC lib/iscsi/iscsi_subsystem.o 00:36:32.542 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:36:32.542 CC 
lib/iscsi/iscsi_rpc.o 00:36:32.542 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:36:32.542 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:36:32.801 CC lib/vhost/vhost_rpc.o 00:36:32.801 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:36:32.801 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:36:32.801 CC lib/ftl/upgrade/ftl_sb_v3.o 00:36:32.801 CC lib/ftl/upgrade/ftl_sb_v5.o 00:36:32.801 CC lib/ftl/nvc/ftl_nvc_dev.o 00:36:32.801 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:36:32.801 CC lib/ftl/base/ftl_base_dev.o 00:36:32.801 CC lib/ftl/base/ftl_base_bdev.o 00:36:32.801 CC lib/iscsi/task.o 00:36:33.060 LIB libspdk_nvmf.a 00:36:33.060 CC lib/vhost/vhost_scsi.o 00:36:33.060 CC lib/vhost/vhost_blk.o 00:36:33.060 CC lib/vhost/rte_vhost_user.o 00:36:33.060 LIB libspdk_ftl.a 00:36:33.060 LIB libspdk_iscsi.a 00:36:33.627 LIB libspdk_vhost.a 00:36:33.885 CC module/env_dpdk/env_dpdk_rpc.o 00:36:33.885 CC module/scheduler/gscheduler/gscheduler.o 00:36:33.885 CC module/accel/dsa/accel_dsa.o 00:36:33.886 CC module/accel/ioat/accel_ioat.o 00:36:33.886 CC module/scheduler/dynamic/scheduler_dynamic.o 00:36:33.886 CC module/sock/posix/posix.o 00:36:33.886 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:36:33.886 CC module/blob/bdev/blob_bdev.o 00:36:33.886 CC module/accel/error/accel_error.o 00:36:33.886 CC module/accel/iaa/accel_iaa.o 00:36:33.886 LIB libspdk_env_dpdk_rpc.a 00:36:33.886 CC module/accel/iaa/accel_iaa_rpc.o 00:36:33.886 LIB libspdk_scheduler_gscheduler.a 00:36:33.886 LIB libspdk_scheduler_dpdk_governor.a 00:36:33.886 CC module/accel/ioat/accel_ioat_rpc.o 00:36:33.886 CC module/accel/error/accel_error_rpc.o 00:36:33.886 LIB libspdk_scheduler_dynamic.a 00:36:34.144 CC module/accel/dsa/accel_dsa_rpc.o 00:36:34.144 LIB libspdk_blob_bdev.a 00:36:34.144 LIB libspdk_accel_iaa.a 00:36:34.144 LIB libspdk_accel_ioat.a 00:36:34.144 LIB libspdk_accel_error.a 00:36:34.144 LIB libspdk_accel_dsa.a 00:36:34.144 CC module/blobfs/bdev/blobfs_bdev.o 00:36:34.144 CC module/bdev/gpt/gpt.o 00:36:34.144 CC module/bdev/delay/vbdev_delay.o 00:36:34.144 CC module/bdev/malloc/bdev_malloc.o 00:36:34.144 CC module/bdev/error/vbdev_error.o 00:36:34.144 CC module/bdev/lvol/vbdev_lvol.o 00:36:34.144 CC module/bdev/null/bdev_null.o 00:36:34.144 CC module/bdev/nvme/bdev_nvme.o 00:36:34.401 CC module/bdev/passthru/vbdev_passthru.o 00:36:34.401 LIB libspdk_sock_posix.a 00:36:34.401 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:36:34.401 CC module/bdev/gpt/vbdev_gpt.o 00:36:34.401 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:36:34.401 CC module/bdev/error/vbdev_error_rpc.o 00:36:34.401 CC module/bdev/null/bdev_null_rpc.o 00:36:34.401 CC module/bdev/delay/vbdev_delay_rpc.o 00:36:34.401 CC module/bdev/malloc/bdev_malloc_rpc.o 00:36:34.401 LIB libspdk_blobfs_bdev.a 00:36:34.401 CC module/bdev/nvme/bdev_nvme_rpc.o 00:36:34.658 LIB libspdk_bdev_error.a 00:36:34.658 CC module/bdev/nvme/nvme_rpc.o 00:36:34.658 LIB libspdk_bdev_passthru.a 00:36:34.658 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:36:34.658 LIB libspdk_bdev_null.a 00:36:34.658 LIB libspdk_bdev_gpt.a 00:36:34.658 CC module/bdev/nvme/bdev_mdns_client.o 00:36:34.658 CC module/bdev/nvme/vbdev_opal.o 00:36:34.658 LIB libspdk_bdev_delay.a 00:36:34.658 LIB libspdk_bdev_malloc.a 00:36:34.658 CC module/bdev/raid/bdev_raid.o 00:36:34.658 CC module/bdev/nvme/vbdev_opal_rpc.o 00:36:34.658 CC module/bdev/split/vbdev_split.o 00:36:34.658 CC module/bdev/zone_block/vbdev_zone_block.o 00:36:34.658 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:36:34.916 LIB libspdk_bdev_lvol.a 00:36:34.916 CC 
module/bdev/aio/bdev_aio.o 00:36:34.916 CC module/bdev/split/vbdev_split_rpc.o 00:36:34.916 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:36:34.916 CC module/bdev/raid/bdev_raid_rpc.o 00:36:34.916 CC module/bdev/ftl/bdev_ftl.o 00:36:34.916 CC module/bdev/virtio/bdev_virtio_scsi.o 00:36:34.916 CC module/bdev/iscsi/bdev_iscsi.o 00:36:34.916 LIB libspdk_bdev_split.a 00:36:34.916 LIB libspdk_bdev_zone_block.a 00:36:35.173 CC module/bdev/ftl/bdev_ftl_rpc.o 00:36:35.173 CC module/bdev/aio/bdev_aio_rpc.o 00:36:35.173 CC module/bdev/raid/bdev_raid_sb.o 00:36:35.173 CC module/bdev/raid/raid0.o 00:36:35.173 CC module/bdev/raid/raid1.o 00:36:35.173 CC module/bdev/raid/concat.o 00:36:35.173 LIB libspdk_bdev_aio.a 00:36:35.173 CC module/bdev/raid/raid5f.o 00:36:35.173 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:36:35.173 LIB libspdk_bdev_ftl.a 00:36:35.173 CC module/bdev/virtio/bdev_virtio_blk.o 00:36:35.173 CC module/bdev/virtio/bdev_virtio_rpc.o 00:36:35.431 LIB libspdk_bdev_iscsi.a 00:36:35.431 LIB libspdk_bdev_virtio.a 00:36:35.431 LIB libspdk_bdev_nvme.a 00:36:35.431 LIB libspdk_bdev_raid.a 00:36:35.689 CC module/event/subsystems/iobuf/iobuf.o 00:36:35.689 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:36:35.689 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:36:35.689 CC module/event/subsystems/sock/sock.o 00:36:35.689 CC module/event/subsystems/scheduler/scheduler.o 00:36:35.689 CC module/event/subsystems/vmd/vmd.o 00:36:35.689 CC module/event/subsystems/vmd/vmd_rpc.o 00:36:35.947 LIB libspdk_event_vhost_blk.a 00:36:35.947 LIB libspdk_event_sock.a 00:36:35.947 LIB libspdk_event_vmd.a 00:36:35.947 LIB libspdk_event_scheduler.a 00:36:35.947 LIB libspdk_event_iobuf.a 00:36:35.947 CC module/event/subsystems/accel/accel.o 00:36:36.205 LIB libspdk_event_accel.a 00:36:36.205 CC module/event/subsystems/bdev/bdev.o 00:36:36.462 LIB libspdk_event_bdev.a 00:36:36.462 CC module/event/subsystems/nbd/nbd.o 00:36:36.462 CC module/event/subsystems/scsi/scsi.o 00:36:36.462 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:36:36.462 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:36:36.720 LIB libspdk_event_nbd.a 00:36:36.720 LIB libspdk_event_scsi.a 00:36:36.720 LIB libspdk_event_nvmf.a 00:36:36.720 CC module/event/subsystems/iscsi/iscsi.o 00:36:36.720 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:36:36.978 LIB libspdk_event_vhost_scsi.a 00:36:36.978 LIB libspdk_event_iscsi.a 00:36:36.978 CC app/trace_record/trace_record.o 00:36:36.978 CC app/spdk_lspci/spdk_lspci.o 00:36:37.236 CXX app/trace/trace.o 00:36:37.236 CC app/nvmf_tgt/nvmf_main.o 00:36:37.236 CC app/iscsi_tgt/iscsi_tgt.o 00:36:37.236 CC examples/accel/perf/accel_perf.o 00:36:37.236 CC app/spdk_tgt/spdk_tgt.o 00:36:37.236 CC test/accel/dif/dif.o 00:36:37.236 CC examples/bdev/hello_world/hello_bdev.o 00:36:37.236 CC examples/blob/hello_world/hello_blob.o 00:36:37.236 LINK spdk_lspci 00:36:37.236 LINK spdk_trace_record 00:36:37.495 LINK nvmf_tgt 00:36:37.495 LINK iscsi_tgt 00:36:37.495 LINK spdk_tgt 00:36:37.495 LINK hello_blob 00:36:37.495 LINK hello_bdev 00:36:37.495 LINK dif 00:36:37.495 LINK accel_perf 00:36:37.753 LINK spdk_trace 00:36:47.726 CC examples/blob/cli/blobcli.o 00:36:49.644 LINK blobcli 00:36:56.213 CC examples/ioat/perf/perf.o 00:36:56.471 LINK ioat_perf 00:37:28.573 CC app/spdk_nvme_perf/perf.o 00:37:28.573 CC app/spdk_nvme_identify/identify.o 00:37:29.945 LINK spdk_nvme_perf 00:37:32.474 LINK spdk_nvme_identify 00:37:36.658 CC examples/ioat/verify/verify.o 00:37:38.033 LINK verify 00:38:34.261 CC 
examples/nvme/hello_world/hello_world.o 00:38:34.261 LINK hello_world 00:39:06.332 CC examples/nvme/reconnect/reconnect.o 00:39:06.332 LINK reconnect 00:39:06.332 CC examples/nvme/nvme_manage/nvme_manage.o 00:39:08.864 LINK nvme_manage 00:39:35.404 CC examples/bdev/bdevperf/bdevperf.o 00:39:36.339 CC examples/sock/hello_world/hello_sock.o 00:39:37.286 LINK bdevperf 00:39:37.855 LINK hello_sock 00:39:39.231 CC test/app/bdev_svc/bdev_svc.o 00:39:40.166 LINK bdev_svc 00:39:41.102 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:39:43.003 LINK nvme_fuzz 00:39:51.116 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:39:54.404 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:39:55.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:39:56.711 LINK vhost_fuzz 00:39:56.969 LINK iscsi_fuzz 00:39:57.904 CC test/app/histogram_perf/histogram_perf.o 00:39:58.839 LINK histogram_perf 00:39:58.839 CC test/app/jsoncat/jsoncat.o 00:39:59.773 LINK jsoncat 00:40:06.329 CC examples/nvme/arbitration/arbitration.o 00:40:09.607 LINK arbitration 00:40:21.808 CC examples/nvme/hotplug/hotplug.o 00:40:22.375 LINK hotplug 00:40:28.938 CC app/spdk_nvme_discover/discovery_aer.o 00:40:28.938 CC test/app/stub/stub.o 00:40:29.504 LINK stub 00:40:29.762 LINK spdk_nvme_discover 00:40:31.660 CC app/spdk_top/spdk_top.o 00:40:35.843 LINK spdk_top 00:40:42.459 CC examples/vmd/lsvmd/lsvmd.o 00:40:42.717 CC examples/nvmf/nvmf/nvmf.o 00:40:43.652 LINK lsvmd 00:40:45.050 LINK nvmf 00:40:55.017 CC examples/vmd/led/led.o 00:40:55.582 LINK led 00:41:07.784 CC app/vhost/vhost.o 00:41:08.349 LINK vhost 00:41:14.940 CC app/spdk_dd/spdk_dd.o 00:41:16.837 LINK spdk_dd 00:41:21.025 CC examples/nvme/cmb_copy/cmb_copy.o 00:41:22.400 LINK cmb_copy 00:41:28.952 CC examples/util/zipf/zipf.o 00:41:28.952 LINK zipf 00:41:37.065 CC examples/thread/thread/thread_ex.o 00:41:38.001 LINK thread 00:41:38.567 CC examples/idxd/perf/perf.o 00:41:40.471 LINK idxd_perf 00:41:50.487 CC test/bdev/bdevio/bdevio.o 00:41:51.864 LINK bdevio 00:42:00.004 CC examples/interrupt_tgt/interrupt_tgt.o 00:42:00.262 LINK interrupt_tgt 00:42:12.456 CC test/blobfs/mkfs/mkfs.o 00:42:13.022 LINK mkfs 00:42:13.985 CC app/fio/nvme/fio_plugin.o 00:42:15.886 CC examples/nvme/abort/abort.o 00:42:16.837 LINK spdk_nvme 00:42:17.413 LINK abort 00:42:17.979 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:42:19.353 LINK pmr_persistence 00:43:40.785 CC app/fio/bdev/fio_plugin.o 00:43:40.785 TEST_HEADER include/spdk/config.h 00:43:40.785 CXX test/cpp_headers/accel_module.o 00:43:40.785 CXX test/cpp_headers/bit_pool.o 00:43:40.785 LINK spdk_bdev 00:43:40.785 CXX test/cpp_headers/ioat.o 00:43:40.785 CXX test/cpp_headers/blobfs.o 00:43:40.785 CXX test/cpp_headers/notify.o 00:43:40.785 CXX test/cpp_headers/pipe.o 00:43:41.352 CXX test/cpp_headers/accel.o 00:43:42.729 CXX test/cpp_headers/file.o 00:43:44.107 CXX test/cpp_headers/version.o 00:43:44.374 CXX test/cpp_headers/trace_parser.o 00:43:46.274 CXX test/cpp_headers/opal_spec.o 00:43:47.648 CXX test/cpp_headers/uuid.o 00:43:49.022 CXX test/cpp_headers/likely.o 00:43:50.396 CXX test/cpp_headers/dif.o 00:43:51.336 CXX test/cpp_headers/memory.o 00:43:52.321 CXX test/cpp_headers/vfio_user_pci.o 00:43:53.697 CXX test/cpp_headers/dma.o 00:43:54.632 CXX test/cpp_headers/nbd.o 00:43:54.890 CXX test/cpp_headers/conf.o 00:43:56.266 CXX test/cpp_headers/env_dpdk.o 00:43:57.639 CXX test/cpp_headers/nvmf_spec.o 00:43:59.014 CXX test/cpp_headers/iscsi_spec.o 00:43:59.273 CC test/dma/test_dma/test_dma.o 00:44:00.209 CXX test/cpp_headers/mmio.o 00:44:00.776 CXX 
test/cpp_headers/json.o 00:44:01.034 LINK test_dma 00:44:01.970 CXX test/cpp_headers/opal.o 00:44:02.908 CXX test/cpp_headers/bdev.o 00:44:03.473 CC test/env/mem_callbacks/mem_callbacks.o 00:44:04.405 CXX test/cpp_headers/base64.o 00:44:04.969 CXX test/cpp_headers/blobfs_bdev.o 00:44:06.339 LINK mem_callbacks 00:44:06.339 CXX test/cpp_headers/nvme_ocssd.o 00:44:06.904 CXX test/cpp_headers/fd.o 00:44:07.838 CC test/env/vtophys/vtophys.o 00:44:08.095 CXX test/cpp_headers/barrier.o 00:44:08.661 LINK vtophys 00:44:08.920 CXX test/cpp_headers/scsi_spec.o 00:44:09.854 CXX test/cpp_headers/zipf.o 00:44:10.787 CXX test/cpp_headers/nvmf.o 00:44:12.160 CXX test/cpp_headers/queue.o 00:44:12.418 CXX test/cpp_headers/xor.o 00:44:13.351 CXX test/cpp_headers/cpuset.o 00:44:13.918 CXX test/cpp_headers/thread.o 00:44:14.850 CXX test/cpp_headers/bdev_zone.o 00:44:15.415 CC test/event/event_perf/event_perf.o 00:44:15.981 LINK event_perf 00:44:16.240 CXX test/cpp_headers/fd_group.o 00:44:17.176 CXX test/cpp_headers/tree.o 00:44:17.435 CXX test/cpp_headers/blob_bdev.o 00:44:18.812 CXX test/cpp_headers/crc64.o 00:44:19.748 CXX test/cpp_headers/assert.o 00:44:20.683 CXX test/cpp_headers/nvme_spec.o 00:44:22.059 CXX test/cpp_headers/endian.o 00:44:22.059 CC test/lvol/esnap/esnap.o 00:44:22.996 CXX test/cpp_headers/pci_ids.o 00:44:23.932 CXX test/cpp_headers/log.o 00:44:24.867 CXX test/cpp_headers/nvme_ocssd_spec.o 00:44:26.255 CXX test/cpp_headers/ftl.o 00:44:27.206 CXX test/cpp_headers/config.o 00:44:27.464 CXX test/cpp_headers/vhost.o 00:44:28.401 CXX test/cpp_headers/bdev_module.o 00:44:29.336 CXX test/cpp_headers/nvme_intel.o 00:44:30.272 CXX test/cpp_headers/idxd_spec.o 00:44:31.208 CXX test/cpp_headers/crc16.o 00:44:32.143 CXX test/cpp_headers/nvme.o 00:44:32.709 CC test/nvme/aer/aer.o 00:44:33.275 CXX test/cpp_headers/stdinc.o 00:44:33.534 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:44:34.101 CC test/env/memory/memory_ut.o 00:44:34.101 LINK aer 00:44:34.101 CXX test/cpp_headers/scsi.o 00:44:34.359 LINK env_dpdk_post_init 00:44:35.292 CXX test/cpp_headers/nvmf_fc_spec.o 00:44:36.226 CXX test/cpp_headers/idxd.o 00:44:36.226 LINK esnap 00:44:36.791 CXX test/cpp_headers/hexlify.o 00:44:37.048 LINK memory_ut 00:44:38.007 CXX test/cpp_headers/reduce.o 00:44:38.575 CXX test/cpp_headers/crc32.o 00:44:39.948 CXX test/cpp_headers/init.o 00:44:40.514 CXX test/cpp_headers/nvmf_transport.o 00:44:42.412 CXX test/cpp_headers/nvme_zns.o 00:44:43.345 CXX test/cpp_headers/vfio_user_spec.o 00:44:44.277 CXX test/cpp_headers/util.o 00:44:45.213 CXX test/cpp_headers/jsonrpc.o 00:44:46.583 CXX test/cpp_headers/env.o 00:44:47.147 CC test/env/pci/pci_ut.o 00:44:47.404 CXX test/cpp_headers/nvmf_cmd.o 00:44:48.338 CXX test/cpp_headers/lvol.o 00:44:48.904 LINK pci_ut 00:44:49.162 CXX test/cpp_headers/histogram_data.o 00:44:50.537 CXX test/cpp_headers/event.o 00:44:51.103 CXX test/cpp_headers/trace.o 00:44:51.361 CC test/event/reactor/reactor.o 00:44:51.927 CC test/event/reactor_perf/reactor_perf.o 00:44:51.927 CXX test/cpp_headers/ioat_spec.o 00:44:52.186 LINK reactor 00:44:52.750 LINK reactor_perf 00:44:53.007 CXX test/cpp_headers/string.o 00:44:53.968 CXX test/cpp_headers/ublk.o 00:44:54.901 CXX test/cpp_headers/bit_array.o 00:44:56.273 CXX test/cpp_headers/scheduler.o 00:44:57.206 CXX test/cpp_headers/blob.o 00:44:58.582 CXX test/cpp_headers/gpt_spec.o 00:44:59.958 CXX test/cpp_headers/sock.o 00:45:01.343 CXX test/cpp_headers/vmd.o 00:45:02.718 CXX test/cpp_headers/rpc.o 00:45:04.092 CC 
test/event/app_repeat/app_repeat.o 00:45:05.466 LINK app_repeat 00:45:12.022 CC test/rpc_client/rpc_client_test.o 00:45:12.022 LINK rpc_client_test 00:45:14.553 CC test/thread/poller_perf/poller_perf.o 00:45:15.486 LINK poller_perf 00:45:20.753 CC test/event/scheduler/scheduler.o 00:45:21.687 LINK scheduler 00:45:24.216 CC test/nvme/reset/reset.o 00:45:25.587 LINK reset 00:45:30.855 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:45:31.790 LINK histogram_ut 00:45:34.360 CC test/nvme/sgl/sgl.o 00:45:34.929 CC test/unit/lib/accel/accel.c/accel_ut.o 00:45:35.494 LINK sgl 00:45:36.427 CC test/nvme/e2edp/nvme_dp.o 00:45:37.801 LINK nvme_dp 00:45:43.068 LINK accel_ut 00:45:44.003 CC test/nvme/overhead/overhead.o 00:45:45.913 LINK overhead 00:45:52.475 CC test/thread/lock/spdk_lock.o 00:45:57.741 LINK spdk_lock 00:46:04.311 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:46:04.311 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:46:06.214 LINK blob_bdev_ut 00:46:08.756 CC test/unit/lib/blob/blob.c/blob_ut.o 00:46:16.867 CC test/nvme/err_injection/err_injection.o 00:46:17.801 LINK err_injection 00:46:18.058 CC test/nvme/startup/startup.o 00:46:18.990 LINK startup 00:46:20.921 LINK bdev_ut 00:46:25.111 CC test/unit/lib/bdev/part.c/part_ut.o 00:46:25.369 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:46:26.303 LINK scsi_nvme_ut 00:46:27.238 LINK blob_ut 00:46:31.426 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:46:31.685 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:46:33.587 LINK gpt_ut 00:46:34.962 LINK part_ut 00:46:35.897 LINK vbdev_lvol_ut 00:46:36.465 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:46:44.585 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:46:45.959 LINK bdev_ut 00:46:46.526 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:46:47.094 CC test/unit/lib/event/app.c/app_ut.o 00:46:47.094 CC test/unit/lib/dma/dma.c/dma_ut.o 00:46:47.094 LINK tree_ut 00:46:47.353 LINK bdev_raid_ut 00:46:47.922 LINK dma_ut 00:46:48.857 LINK app_ut 00:46:50.758 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:46:52.661 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:46:53.228 LINK bdev_zone_ut 00:46:53.228 LINK blobfs_async_ut 00:46:54.166 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:46:54.424 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:46:55.359 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:46:55.617 CC test/nvme/reserve/reserve.o 00:46:55.617 LINK reactor_ut 00:46:55.876 LINK bdev_raid_sb_ut 00:46:56.135 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:46:56.135 LINK reserve 00:46:56.398 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:46:56.398 LINK blobfs_sync_ut 00:46:57.370 LINK vbdev_zone_block_ut 00:46:58.747 CC test/nvme/simple_copy/simple_copy.o 00:46:59.315 LINK simple_copy 00:46:59.883 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:47:01.256 LINK concat_ut 00:47:02.189 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:47:02.189 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:47:03.564 LINK raid1_ut 00:47:04.938 LINK raid5f_ut 00:47:04.938 LINK bdev_nvme_ut 00:47:06.313 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:47:06.880 LINK blobfs_bdev_ut 00:47:07.815 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:47:09.282 LINK ioat_ut 00:47:09.864 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:47:10.122 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:47:11.058 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:47:11.058 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:47:11.626 CC 
test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:47:11.626 LINK conn_ut 00:47:11.885 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:47:12.144 LINK json_util_ut 00:47:12.144 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:47:12.711 LINK init_grp_ut 00:47:12.711 LINK jsonrpc_server_ut 00:47:13.300 LINK json_write_ut 00:47:13.301 LINK json_parse_ut 00:47:17.495 CC test/unit/lib/iscsi/param.c/param_ut.o 00:47:17.495 LINK iscsi_ut 00:47:17.495 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:47:18.064 LINK param_ut 00:47:18.064 LINK portal_grp_ut 00:47:18.323 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:47:19.702 CC test/nvme/connect_stress/connect_stress.o 00:47:19.702 CC test/nvme/boot_partition/boot_partition.o 00:47:19.961 LINK tgt_node_ut 00:47:19.961 LINK connect_stress 00:47:20.220 CC test/unit/lib/log/log.c/log_ut.o 00:47:20.220 LINK boot_partition 00:47:20.789 LINK log_ut 00:47:22.166 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:47:22.734 CC test/nvme/fused_ordering/fused_ordering.o 00:47:22.734 CC test/nvme/compliance/nvme_compliance.o 00:47:22.992 LINK fused_ordering 00:47:23.250 CC test/nvme/doorbell_aers/doorbell_aers.o 00:47:23.509 LINK nvme_compliance 00:47:23.768 LINK doorbell_aers 00:47:23.768 CC test/nvme/fdp/fdp.o 00:47:24.027 CC test/nvme/cuse/cuse.o 00:47:24.027 LINK lvol_ut 00:47:24.595 LINK fdp 00:47:24.854 CC test/unit/lib/notify/notify.c/notify_ut.o 00:47:25.421 LINK notify_ut 00:47:25.988 LINK cuse 00:47:28.520 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:47:33.791 LINK nvme_ut 00:47:34.050 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:47:35.959 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:47:41.234 LINK nvme_ctrlr_cmd_ut 00:47:44.519 LINK nvme_ctrlr_ut 00:47:45.894 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:47:50.081 LINK nvme_ctrlr_ocssd_cmd_ut 00:47:54.275 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:47:54.532 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:47:55.906 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:47:56.164 LINK nvme_ns_ut 00:47:57.098 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:47:57.098 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:47:59.000 LINK nvme_ns_cmd_ut 00:47:59.000 LINK nvme_ns_ocssd_cmd_ut 00:47:59.259 LINK nvme_pcie_ut 00:47:59.827 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:47:59.828 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:48:00.087 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:48:00.347 LINK tcp_ut 00:48:00.347 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:48:00.917 LINK ctrlr_bdev_ut 00:48:01.177 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:48:01.177 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:48:01.436 LINK ctrlr_discovery_ut 00:48:01.436 LINK ctrlr_ut 00:48:02.005 LINK subsystem_ut 00:48:02.005 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:48:02.573 LINK nvme_poll_group_ut 00:48:02.573 LINK nvmf_ut 00:48:02.573 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:48:04.485 LINK nvme_quirks_ut 00:48:04.485 LINK nvme_qpair_ut 00:48:06.422 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:48:06.422 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:48:06.422 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:48:06.691 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:48:06.950 LINK dev_ut 00:48:07.208 CC test/unit/lib/sock/sock.c/sock_ut.o 00:48:07.208 CC test/unit/lib/util/base64.c/base64_ut.o 00:48:07.208 CC test/unit/lib/thread/thread.c/thread_ut.o 00:48:07.208 CC 
test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:48:07.208 LINK base64_ut 00:48:07.775 LINK iobuf_ut 00:48:07.775 LINK nvme_tcp_ut 00:48:07.775 LINK transport_ut 00:48:07.775 LINK rdma_ut 00:48:07.775 LINK sock_ut 00:48:07.775 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:48:08.714 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:48:08.714 LINK bit_array_ut 00:48:08.714 LINK thread_ut 00:48:10.092 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:48:11.032 LINK nvme_transport_ut 00:48:11.292 LINK lun_ut 00:48:12.228 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:48:13.165 LINK cpuset_ut 00:48:13.165 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:48:13.732 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:48:13.732 CC test/unit/lib/sock/posix.c/posix_ut.o 00:48:13.732 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:48:13.732 LINK crc16_ut 00:48:13.992 LINK nvme_io_msg_ut 00:48:13.992 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:48:14.251 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:48:14.252 LINK posix_ut 00:48:14.252 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:48:14.252 LINK pci_event_ut 00:48:14.252 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:48:14.252 LINK crc32_ieee_ut 00:48:14.252 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:48:14.252 LINK scsi_ut 00:48:14.510 LINK crc32c_ut 00:48:14.510 LINK crc64_ut 00:48:14.510 LINK nvme_pcie_common_ut 00:48:14.767 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:48:15.026 CC test/unit/lib/util/dif.c/dif_ut.o 00:48:15.026 CC test/unit/lib/util/iov.c/iov_ut.o 00:48:15.026 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:48:15.026 CC test/unit/lib/util/math.c/math_ut.o 00:48:15.026 LINK subsystem_ut 00:48:15.026 LINK iov_ut 00:48:15.026 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:48:15.284 LINK math_ut 00:48:15.543 LINK scsi_bdev_ut 00:48:15.543 LINK scsi_pr_ut 00:48:15.801 LINK dif_ut 00:48:16.060 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:48:16.318 CC test/unit/lib/util/string.c/string_ut.o 00:48:16.577 LINK pipe_ut 00:48:16.577 LINK string_ut 00:48:16.577 CC test/unit/lib/util/xor.c/xor_ut.o 00:48:16.835 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:48:16.835 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:48:17.095 LINK xor_ut 00:48:17.095 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:48:17.095 LINK rpc_ut 00:48:17.353 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:48:17.353 LINK idxd_user_ut 00:48:17.612 LINK nvme_fabric_ut 00:48:17.612 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:48:17.872 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:48:17.872 LINK idxd_ut 00:48:17.872 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:48:18.131 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:48:18.131 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:48:18.131 LINK nvme_opal_ut 00:48:18.131 CC test/unit/lib/rdma/common.c/common_ut.o 00:48:18.390 LINK ftl_l2p_ut 00:48:18.390 LINK common_ut 00:48:18.390 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:48:18.649 LINK nvme_cuse_ut 00:48:18.649 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:48:18.908 LINK nvme_rdma_ut 00:48:18.908 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:48:18.908 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:48:18.908 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:48:19.167 LINK ftl_io_ut 00:48:19.167 LINK ftl_bitmap_ut 00:48:19.167 LINK ftl_band_ut 00:48:19.167 LINK vhost_ut 00:48:19.167 LINK ftl_mempool_ut 00:48:19.427 LINK ftl_mngt_ut 00:48:19.686 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:48:20.623 CC 
test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:48:21.189 LINK ftl_sb_ut 00:48:21.448 LINK ftl_layout_upgrade_ut 00:49:29.176 22:57:19 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:49:29.176 make[1]: Nothing to be done for 'clean'. 00:49:29.176 22:57:23 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:49:29.176 22:57:23 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:49:29.176 22:57:23 -- common/autotest_common.sh@10 -- $ set +x 00:49:29.176 22:57:23 -- spdk/autopackage.sh@48 -- $ timing_finish 00:49:29.176 22:57:23 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:29.176 22:57:23 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:49:29.176 22:57:23 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:29.176 + [[ -n 2352 ]] 00:49:29.176 + sudo kill 2352 00:49:29.176 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:49:29.198 [Pipeline] } 00:49:29.219 [Pipeline] // timeout 00:49:29.225 [Pipeline] } 00:49:29.243 [Pipeline] // stage 00:49:29.249 [Pipeline] } 00:49:29.267 [Pipeline] // catchError 00:49:29.277 [Pipeline] stage 00:49:29.280 [Pipeline] { (Stop VM) 00:49:29.292 [Pipeline] sh 00:49:29.572 + vagrant halt 00:49:32.103 ==> default: Halting domain... 00:49:42.087 [Pipeline] sh 00:49:42.365 + vagrant destroy -f 00:49:44.896 ==> default: Removing domain... 00:49:46.288 [Pipeline] sh 00:49:46.568 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 00:49:46.577 [Pipeline] } 00:49:46.595 [Pipeline] // stage 00:49:46.601 [Pipeline] } 00:49:46.618 [Pipeline] // dir 00:49:46.624 [Pipeline] } 00:49:46.641 [Pipeline] // wrap 00:49:46.648 [Pipeline] } 00:49:46.663 [Pipeline] // catchError 00:49:46.673 [Pipeline] stage 00:49:46.675 [Pipeline] { (Epilogue) 00:49:46.691 [Pipeline] sh 00:49:46.971 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:01.866 [Pipeline] catchError 00:50:01.869 [Pipeline] { 00:50:01.884 [Pipeline] sh 00:50:02.164 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:02.164 Artifacts sizes are good 00:50:02.173 [Pipeline] } 00:50:02.191 [Pipeline] // catchError 00:50:02.202 [Pipeline] archiveArtifacts 00:50:02.210 Archiving artifacts 00:50:02.552 [Pipeline] cleanWs 00:50:02.565 [WS-CLEANUP] Deleting project workspace... 00:50:02.565 [WS-CLEANUP] Deferred wipeout is used... 00:50:02.593 [WS-CLEANUP] done 00:50:02.594 [Pipeline] } 00:50:02.612 [Pipeline] // stage 00:50:02.617 [Pipeline] } 00:50:02.632 [Pipeline] // node 00:50:02.637 [Pipeline] End of Pipeline 00:50:02.667 Finished: SUCCESS